
FTP

Mar 30, 2011 at 6:09 PM

Network configuration issues for FTP - client or server - I'm not going to blatantly steal this FAQ from FileZilla, so here's a link instead :-)
http://wiki.filezilla-project.org/Network_Configuration

Mar 30, 2011 at 6:11 PM

A cool management app to look like this would be good: http://www.pablosoftwaresolutions.com/html/quick__n_easy_ftp_server_lite.html. Might be a good excuse to try out those WPF widgets :-)

Mar 31, 2011 at 9:02 PM

I have been looking at the security side of the user permissions. It seems that in order for FTP to initially communicate, a user / password pair has to be sent "in the clear".

Quote from an MS Site:
It is important to remember that, by default, credentials sent to the FTP server are sent in clear text. This can present a security risk, especially for FTP connections that are made over the Internet.
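
For illustration only (the user name and password here are made up), this is roughly what the initial login exchange looks like on the wire, so anyone who can capture the traffic can read both:

220 FTP service ready
USER alice                <- user name crosses the network as plain text
331 Password required for alice
PASS secret123            <- and so does the password
230 User alice logged in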

Apr 1, 2011 at 7:56 PM
Edited Apr 1, 2011 at 8:47 PM

Speed test of a plain copy via Explorer from Win 7 x64 to Win 7 x32, via NetDrive to the LiquesceFTPSvc, without any drive pooling.

The bad news is that TeraCopy v2.2b2 only managed 8 MB/s, which is a bit poor!

BUT.. copying to the same drive via the Windows share returns 65 MB/s in Explorer and 74 MB/s with TeraCopy.
A drive-to-drive copy on the Win 7 x32 (the FTP host) machine gives between 80 and 88 MB/s with TeraCopy, so the Windows share / network overhead is pretty low and is probably down to good buffer negotiation.
Trying to play a large file (18GB) via the Explorer web folder did not work. Playing it via NetDrive suffered buffering (probably due to the small transfer packet size out of the service).
I ran a download test with BitKinex and it was getting 34 MB/s, but I am not sure how it integrates into Windows Explorer to allow media players etc.

Apr 2, 2011 at 5:31 PM

It seems that NetDrive is limited when copying out of the mapped drive. As seen above, a copy from the client through NetDrive to the target gets 44 MB/s, but a copy off the server through NetDrive to the client only gets 1-2 MB/s (via TeraCopy or Explorer). Copying via the web folder gets 18 MB/s through Explorer. The problem is that Media Player will not use the web folder without downloading the whole file first, and 2 MB/s is not enough for HD mkv playback.

Apr 5, 2011 at 8:21 PM
Edited Apr 5, 2011 at 8:31 PM

So the latest changeset and the use of network streams gives WS_FTP LE a speed of 55 MB/s.

And its log states this:

125 Connected, Starting Data Transfer.
226 Transfer Completed.
transferred 14756199007 bytes in 255.995 seconds, 450332.252 Kbps ( 56291.531 Kbps), transfer succeeded.
Transfer request completed with status: Finished

Edit:
Just performed the write and I get this (which beats Explorer by a few MB/s and is the equivalent of writing to the direct share [see above]!):

226 Transfer Complete.
transferred 14756199007 bytes in 215.533 seconds, 534874.345 Kbps ( 66859.293 Kbps), transfer succeeded.
Transfer request completed with status: Finished

Apr 6, 2011 at 9:48 AM
Edited Apr 6, 2011 at 9:48 AM

Thank you for all your work.

Hopefully this will lead to something good eventually.

Good enough for "some people" to steal it. ;)

 

Apr 6, 2011 at 5:10 PM
NLS wrote:
Thank you for all your work.
Hopefully this will lead to something good eventually.

Thanks.. It seems that I may still have to resort to using Dokan to give local clients full Explorer functionality while keeping the speed and random access parts working. That will be a secondary task after I have full-throttle FTP working with the pool architecture behind it.

So a question: does anyone have any experience with a plugin / driver / service that maps a drive into Explorer, uses FTP, and actually works with some speed?

Apr 6, 2011 at 5:51 PM
Edited Apr 6, 2011 at 5:57 PM

A little more info on NetDrive (Free) when copying from FTP:

  • It keeps forcibly closing the data connection and then restarting the connection
  • Over a Gig network, the max that can be achieved is 2MB/s
  • On local loopback (127.0.0.1) the max is 8.0 MB/s
  • It is worse with TeraCopy as it has to resync after each aborted packet

In summary, it's great for sending, not so much for retrieval :-(

Edit:
So.. Let's hope the promised "DriveMaker for public downloads" will be better and save me a lot of pain with Dokan..

Apr 7, 2011 at 12:43 AM

http://en.wikipedia.org/wiki/Slow-start

I've been experiencing slower TCP/IP performance in some recent tests, around 20 MB/s (even over the loopback device), but then it jumps up to 110 MB/s after a while, which is more like what I expect it to be.  The test program I've been using has this issue each time a new connection is made (shares in Windows keep one socket open the whole time, I think, so that's why I don't see this issue there, not to mention any trickery MS is doing to hide the normal slow-start behaviour), but I'm sure FTP will close and open sockets as needed, so it may be affected...

I've not yet found a solution, although I've played with "netsh interface tcp set global autotuninglevel=disabled":

http://www.sysprobs.com/windows-7-network-slow

http://smallvoid.com/article/windows-tcpip-settings.html

And some more items in SG TCP Optimizer:

http://www.speedguide.net/downloads.php

Not sure if this helps!  But those numbers look too low to be down simply to poorly performing FTP algorithms.

Apr 7, 2011 at 7:39 AM

More info: the 20 MB/s was with a socket buffer size of 64KB (i.e. read a 64KB chunk from disk and pass it all to the socket in one go) and I've managed to get this up to 77 MB/s using a 256KB buffer.  My test app crashes with 512KB, because it's multithreaded and my CreateThread calls are using the default stack size...  ATM I really can't be bothered re-engineering it to allocate memory dynamically rather than use the stack.  So anyway, I'm happy with between 77 MB/s and 110 MB/s - since this is about what I'd expect (as the disks I'm using run as slow as 50 MB/s at the "slow end").

For the record, I did play with this sort of thing about 15 years ago - and found with Win95/NT4 that many network cards would not accept more than 4KB in a single call to send (socket) - but this isn't a problem anymore it seems (using Windows 7).  How things change over time.

One last thing, when I tried a 4KB buffer and this "slow-down thingy" occurred, it was really slow.  I could see the file "being sent" slowly increasing in size - which suggests something is pausing individual packets from being transmitted...  Odd stuff.
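
(For anyone wanting to reproduce this without my C test app: below is a rough C# sketch of the same read-a-chunk-and-send-it loop, timed for a few chunk sizes. The socket is assumed to be already connected, and the class/method names and sizes are made up for the example.)

using System;
using System.Diagnostics;
using System.IO;
using System.Net.Sockets;

static class ChunkSizeTest
{
    // Push one large file down an already-connected socket using different chunk
    // sizes and report the apparent throughput for each. The buffers live on the
    // heap, so there is no thread stack-size problem to work around.
    static void Run(Socket connected, string bigFile)
    {
        foreach (int chunk in new[] { 4 * 1024, 64 * 1024, 256 * 1024, 512 * 1024 })
        {
            byte[] buffer = new byte[chunk];
            long total = 0;
            Stopwatch timer = Stopwatch.StartNew();
            using (FileStream fs = new FileStream(bigFile, FileMode.Open, FileAccess.Read,
                                                  FileShare.Read, chunk))
            {
                int read;
                while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
                {
                    int sent = 0;
                    while (sent < read)   // Send may accept less than was asked for
                        sent += connected.Send(buffer, sent, read - sent, SocketFlags.None);
                    total += read;
                }
            }
            timer.Stop();
            Console.WriteLine("{0,7} byte chunks: {1:F1} MB/s",
                              chunk, total / timer.Elapsed.TotalSeconds / (1024.0 * 1024.0));
        }
    }
}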

Apr 7, 2011 at 8:25 PM
Edited Apr 7, 2011 at 8:39 PM
jaso wrote:
One last thing, when I tried a 4KB buffer and this "slow-down thingy" occurred, it was really slow.  I could see the file "being sent" slowly increasing in size - which suggests something is pausing individual packets from being transmitted...  Odd stuff.

If you have the Nagle algorithm off, and have the default window size of 64K, then that's a lot of wasted space.

That is why, when I converted to using the network stream, I got a significant boost with the clients that can use the TCP channels properly: the full MTU / window size was being used at full speed. All I had to do was keep filling the stream bucket at the FTP end, and the client just absorbed the contents at drive -> FTP Svc <-> Network -> FTP Client -> Drive speeds.
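
A stripped-down sketch of that idea (not the actual service code - the names and chunk size are illustrative): read from the drive and keep writing into the data connection's NetworkStream, letting TCP sort out the window / MTU underneath.

using System.IO;
using System.Net.Sockets;

static class DataChannelPump
{
    // Hot loop only: the real service also has to deal with offsets, aborts and errors.
    static void PumpFileToClient(TcpClient dataConnection, string path)
    {
        const int ChunkSize = 256 * 1024;   // illustrative; big enough to keep the "bucket" full
        byte[] buffer = new byte[ChunkSize];
        using (NetworkStream net = dataConnection.GetStream())
        using (FileStream file = new FileStream(path, FileMode.Open, FileAccess.Read,
                                                FileShare.Read, ChunkSize))
        {
            int read;
            while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
            {
                net.Write(buffer, 0, read);   // only blocks when the TCP send buffers are full
            }
        }
    }
}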

Edit BTW:
Check out [release:63591] - it's the code that produced the really impressive results above (compared to the Explorer copy / write speeds to a direct share).

Apr 8, 2011 at 12:26 PM
Edited Apr 9, 2011 at 3:56 AM

But that's sort of my point, in that turning off all these things hasn't really changed the slow(er) behaviour.  Only forcing more data "at once" into the socket has made any difference, and the magnitude of change needed makes no real sense.

Also, I don't have a window size of 64KB - the MTU is set to 9KB (though I've also tested it unmodified) on the PCIe network card in both machines, and 4KB on the PCI network cards (these machines each have two built-in network devices and are connected with 2 crossover cables - I had also tried combining the ports with each vendor's teaming software, but although it says 2Gb it isn't really - unless you are using different/more sockets in the one transfer, and again I didn't want to engineer a read-the-file-from-both-ends style mechanism to do super-fast transfers).  The loopback device is stock standard (assumed MTU of 1.5KB, but who knows...).

The 64KB buffer I mentioned was just how much disk IO takes place between "send" commands to the socket and thus also indicates how much data is passed in one socket call.  Since Windows buffers disk IO in 256KB blocks - changing IO from 256KB to 4KB shouldn't add a huge (or frankly even small) overhead to "getting the data" into app. buffers.  These are fast (CPU) machines and don't show much overhead difference between those tests in task manager.

If I use Windows Shares, and because one machine transfers from a SSD drive with ~230MB/s read performance, it states the transfer is running at 125MB/s - since Windows only tells you how fast the transmit buffers fill up...  I know it's really about 110MB/s though.  I also know the target drive (a Seagate SATAIII 2TB disk) has a write performance at the start of the disk of ~140MB/s - and ~70MB/s at the slower end - and I'm using the fast end (it's almost blank!) at the moment for these tests.

So, I guess all I'm saying is, when it comes to testing performance - the funny things going on with Windows Networking that slow down throughput (which are reported everywhere on the net also!) are more of a concern for the numbers you are describing.  Oh, and also that I'd be a happy chappy if someone could tell me exactly what it is that needs to be changed to make these sockets run at full speed from the instant they are opened!

(EDIT: just changed KB to MB where I typed the wrong thing.)

Apr 8, 2011 at 1:00 PM
jaso wrote:
1) The 64KB buffer I mentioned was just how much disk IO takes place between "send" commands to the socket and thus also indicates how much data is passed in one socket call.  Since Windows buffers disk IO in 256KB blocks - changing IO from 256KB to 4KB shouldn't add a huge (or frankly even small) overhead to "getting the data" into app. buffers.  These are fast (CPU) machines and don't show much overhead difference between those tests in task manager.
2) So, I guess all I'm saying is, when it comes to testing performance - the funny things going on with Windows Networking that slow down throughput (which are reported everywhere on the net also!) are more of a concern for the numbers you are describing.  Oh, and also that I'd be a happy chappy if someone could tell me exactly what it is that needs to be changed to make these sockets run at full speed from the instant they are opened!

1) The 64K that I mentioned was from the C# sockets variable stating that it had defaulted to a window size of 64K on Windows 7, regardless of the MTU etc.

2) I run my tests over the full 18GB, thus negating any speed-up, slow down, collision algorithm stuff.

Note: HDDs (especially SATA III ones) have a buffer that skews the initial results. Add to this the internal buffering Windows is doing as well, and you end up with the initial MBs being cached in the Windows "spare" memory buffer and then streamed into the HDD buffer before anything is even written to a drive platter. That's another reason why I chose a file that is far larger than any amount of memory Windows could possibly use as cache in my setups.
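
For anyone wanting to poke at that value themselves, this is a quick sketch (not the service code, and possibly not the exact variable I was looking at) of reading and overriding the socket buffer sizes from C#:

using System;
using System.Net.Sockets;

static class SocketBufferCheck
{
    static void Main()
    {
        using (TcpClient client = new TcpClient())
        {
            // The post above reports a 64K default on Windows 7,
            // regardless of the NIC's MTU setting.
            Console.WriteLine("Send buffer:    {0}", client.SendBufferSize);
            Console.WriteLine("Receive buffer: {0}", client.ReceiveBufferSize);

            // They can be overridden per socket before connecting, for experiments:
            client.SendBufferSize = 256 * 1024;
            client.ReceiveBufferSize = 256 * 1024;
        }
    }
}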

@Jaso: What software was the FTP server you were using for your homebrew client tests?

Apr 8, 2011 at 3:24 PM

It's not FTP, just plain old "C" sockets that happen to do what FTP would normally do (I've got source to a basic example) in the most basic mode (pasv is more complicated).  I.E. read, send, repeat until all data is sent down the socket - there's no scope for improving performance in the code here except to change the buffering.

Yeah, the files I'm transferring aren't small either; each of the many transfers I test (remember I open a new socket for each file in these tests) is 2GB or larger in size.  Overall there'll be 20GB or so of data there, but it's the restarting that I was interested in.  In addition, the reported MB/s is (re)calculated every 2 seconds, so it settles down right away once buffers are full (which happens in the first second).  And the possible error in the calculation here is only a small multiple of the buffer size in use.

I've modified the new thread calls to allocate 2MB stacks now - and using a 512KB buffer (it no longer crashes) it gives a perfectly constant 107MB/s all the time now, at least after the first second while hardware buffers are making things faster.

Without going overboard (meaning 512KB chunks shouldn't be required to get 100% utilisation from a Gbe network, given the machine isn't doing anything else at the time and can easily service the hardware when required) these transfers range from 20MB/s up to 110MB/s (so even with small 32KB buffers, I can get the full 110MB/s [107MB/s is probably more likely to be the exact value] that's expected, it's just I don't get it all the time).  Following on from my point, if you are not getting anywhere near this, where your numbers say: 2MB/s to 8MB/s, then I think what I've just had to deal with may also be having an effect on your tests.  And just to be contentious I don't think taking the full 18GB/over-time does in fact "negate" speed-up/slow-down or collisions (though I have very-very few of those, given it's a crossover cable and the target machine isn't sending anything significant back to the source machine) but rather reports exactly the result of such behaviour.

Are you able to repeat your tests with a really large buffer?  (Mine now has 512KB on both "sides", though you might be right that it only needs to be on the sender's side of things?)  (I ask because I don't know how much of the software you're using is configurable in this way.)

Apr 9, 2011 at 12:53 AM
Edited Apr 9, 2011 at 4:08 AM

Finally I know what's causing it - and it's not TCP/IP (well, not entirely)...  When I woke this morning I decided to try and improve performance (which I thought couldn't be done, but...) by creating a reader thread and a sender thread (using 2 alternating buffers, now tested with both 64KB and 512KB sizes), and the results of these tests are ~105MB/s (64KB) and ~111MB/s (512KB), which seems to be quite reliable now.

From this, I now know Windows isn't always doing "read ahead" IO, so the ReadFile call is blocking far more than I expected.  This in turn causes the TCP/IP subsystem to give up waiting for more data and start to process buffers as-is.  Only after a short time does Windows decide to start making performance decisions in the file read operations and my transfer speed increases - or perhaps TCP/IP too is deciding to wait just a little longer based on past socket use.  By using a really large buffer (512KB) *in the old tests* I'm reducing the number of times this can occur, but it still isn't optimal (hence I got 107MB/s, which isn't quite 100% of what is achievable), and it now seems to be a sledgehammer solution for an inelegant design.

From the 64KB multithreaded test, it can be seen that there are still times when one of the two required buffers isn't quite ready when needed from the 'other thread', but it's quite rare - when using 512KB buffers (and knowing that file reading is always faster in these tests on my setup than TCP/IP is) I'd guess that a buffer is always available when needed for TCP/IP to be 100% utilised.

I wonder how many other (FTP) solutions out there don't bother to use a design like this (or even a large circular buffer - which was another perhaps better way I was tossing up on implementing.)  The various FTP samples I've seen don't do anything like this!
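
(My test program is plain C, but here is a rough C# sketch of the same reader-thread / sender-thread hand-off, assuming an already-connected socket. The buffer size, queue depth and names are illustrative, not a definitive implementation.)

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Net.Sockets;
using System.Threading;

static class DoubleBufferedSender
{
    // The reader keeps a small bounded queue of filled buffers topped up, so the
    // socket should never have to wait on a blocking file read. The bounded queue
    // of two entries plays the role of the "two alternating buffers".
    static void Send(Socket connected, string path, int bufferSize = 512 * 1024)
    {
        var filled = new BlockingCollection<Tuple<byte[], int>>(boundedCapacity: 2);

        var reader = new Thread(() =>
        {
            using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read,
                                           FileShare.Read, bufferSize))
            {
                byte[] buffer = new byte[bufferSize];
                int read;
                while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
                {
                    filled.Add(Tuple.Create(buffer, read));   // blocks if the sender falls 2 behind
                    buffer = new byte[bufferSize];            // fresh buffer for the next read
                }
            }
            filled.CompleteAdding();
        });
        reader.Start();

        // Sender loop on the calling thread: drain buffers as fast as TCP will take them.
        foreach (var item in filled.GetConsumingEnumerable())
        {
            int sent = 0;
            while (sent < item.Item2)
                sent += connected.Send(item.Item1, sent, item.Item2 - sent, SocketFlags.None);
        }
        reader.Join();
    }
}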

Apr 9, 2011 at 11:16 AM
jaso wrote:
If I use Windows Shares, and because one machine transfers from a SSD drive with ~230MB/s read performance, it states the transfer is running at 125MB/s - since Windows only tells you how fast the transmit buffers fill up...  I know it's really about 110MB/s though.  I also know the target drive (a Seagate SATAIII 2TB disk) has a write performance at the start of the disk of ~140MB/s - and ~70MB/s at the slower end - and I'm using the fast end (it's almost blank!) at the moment for these tests.

So the target you have been aiming for with your code is to match, in code, what Windows can provide machine to machine.

What I demonstrated was that I exceeded that with my setup, and looking at your multi-threaded numbers I see that you exceeded yours as well.

The point of this is that some people have been stating that FTP is SLOW, but it doesn't have to be, IF you have the right server AND the right client.

It's just that I haven't found a client that integrates into Windows Explorer and gives both reasonable write and read performance. So I will have to see if I can write one that does, or get a volunteer to help :-)

Apr 9, 2011 at 3:12 PM
Edited Apr 9, 2011 at 3:13 PM

Yes, for my use (this stuff is being integrated into the backup software source code I use, which I'm trying to automate now) I want it to perform at its best, if the hardware supports it...  I'm not entirely sure I've exceeded what Windows provides, but I know it's at least been matched - that is, with fast source and destination disks (the 2 fast blank Seagates I'm *now* using in these tests).  However, from my tests I now suspect that if the drive isn't supplying greater throughput than the network can handle, then there may be degradation which is unavoidable.  I.e. I know reading the files in 64KB chunks still gets more than 110MB/s, but it doesn't translate to that through the network, where I get 105MB/s.  So if the drive is getting full(er) (and so slower, at the inner tracks of the disk where its performance is around half of network speed - usually 50~70MB/s) then I now suspect the small "breaks" (in buffer fullness) will make the network *even* slower than expected, just like my original 4KB test, where it really dies in the arse!

But with what I've learnt I totally agree with you, there's no reason FTP can't do fast transfers...  However, I'm not "expert" enough [yet] to comment on whether both client and server need to be right ( / "perfect") (except in that the direction of data "pushing" would suggest this is true), since I've not tested whether the receiver buffers make any difference...  After all, file writes do not block in Windows - unless there's a good reason, like IO being saturated, which I'm happy (and expect) to pause for.  All my tests use a single program (it acts as both client and server), and when I change the buffer size, it changes it for everything.  Socket "recv" will also not block - if there's plenty of data ready to read.

I did see a sustained 112.5MB/s for quite a few seconds (maybe 5 or 6 in a row) today with one test, but it dropped again to 111 before the file had completed transferring...  Maybe it was just lucky, all the bits in the puzzle working together well at that particular point.

Apr 21, 2011 at 5:11 PM
Edited Apr 23, 2011 at 9:34 AM

I have found another client that maps FTP into an Explorer drive letter - and it's free: "Cloud Desktop - Starter Edition"

"http://www.gladinet.com/c/index.php/gladinet-products-services/"

Edit:
Seems like the latest version has a problem copying large files (I used the 14GB one) from FTP servers. Otherwise it was really good and fast.

Jun 23, 2011 at 5:01 PM
Edited Jun 23, 2011 at 5:02 PM

Would really like the FTP manager to look like this, but without all of the tabs (that would require a lot of background work and several releases :-)