java - Can I write multiple byte arrays to an HttpClient without client-side buffering?
The problem
I upload big files (up to 5 or 6 GB) to a web server using the HttpClient
class (4.1.2) from Apache. Before sending these files, I break them into smaller chunks (100 MB, for example). Unfortunately, all of the examples I see for doing a multi-part POST with HttpClient
appear to buffer the file contents before sending them (typically, a small file size is assumed). Here is one such example:
```java
HttpClient httpClient = new DefaultHttpClient();
HttpPost post = new HttpPost("http://www.example.com/upload.php");
MultipartEntity mpe = new MultipartEntity();

// Here are the plain-text fields that are part of our multi-part upload
mpe.addPart("chunkIndex", new StringBody(Integer.toString(chunkIndex)));
mpe.addPart("fileName", new StringBody(someFile.getName()));

// Here is the file to include; it looks like we're including the whole thing!
FileBody bin = new FileBody(new File("/path/to/myfile.bin"));
mpe.addPart("myFile", bin);

post.setEntity(mpe);
HttpResponse response = httpClient.execute(post);
```
In this example, it looks like you create a new FileBody
object and add it to the MultipartEntity
. In my case, where a file chunk is 100 MB in size, I'd rather not buffer all of that data at once. I'd like to be able to write the data out in smaller pieces (4 MB at a time, for example) until all 100 MB have been written. I'm able to do this with the HttpURLConnection
class in Java (by writing straight to the output stream), but that class has its own set of problems, which is why I'm trying to use the Apache offerings instead.

Is it possible to write 100 MB of data through HttpClient, but in smaller, iterative chunks? I don't want the client to have to buffer 100 MB of data before actually doing the POST. None of the examples I've seen allow writing straight to the output stream; they all appear to pre-package everything before the execute()
call.
Any tips would be appreciated!
--- Update ---

For clarification, here's what I did with the HttpURLConnection
class. I'm trying to figure out how to do something similar with HttpClient
:
```java
// Get the connection's output stream
out = new DataOutputStream(conn.getOutputStream());

// Write the plain-text multi-part data
out.writeBytes(fieldBuffer.toString());

// Figure out how many loops we'll need to write the 100 MB chunk
int bufferLoops = (dataLength + (bufferSize - 1)) / bufferSize;

// Open the local file (~5 GB in size) to read the data chunk (100 MB)
raf = new RandomAccessFile(file, "r");
raf.seek(startingOffset); // position the pointer at the origin of the chunk

// Keep track of how many bytes we have left to read for this chunk
int bytesLeftToRead = dataLength;

// Write the file data block to the output stream
for (int i = 0; i < bufferLoops; i++) {
    // Create an appropriately sized mini-buffer (max 4 MB) for the pieces
    // of the chunk we have yet to read
    byte[] buffer = (bytesLeftToRead < bufferSize)
            ? new byte[bytesLeftToRead]
            : new byte[bufferSize];
    int bytes_read = raf.read(buffer);   // read ~4 MB from the local file
    out.write(buffer, 0, bytes_read);    // write that piece to the stream
    bytesLeftToRead -= bytes_read;
}

// Write the final boundary
out.writeBytes(finalBoundary);
out.flush();
```
If I'm understanding your question correctly, your concern is loading the whole file into memory (right?). If that is the case, you should employ streams (such as a FileInputStream). That way, the whole file doesn't get pulled into memory at once.
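The streaming suggestion above can be sketched with plain JDK classes: a bounded InputStream that exposes only one region (offset, length) of the large file, so nothing bigger than a small read buffer is ever held in memory. ChunkInputStream is an illustrative name, not part of any library; with HttpClient 4.x such a stream could then (untested assumption) be wrapped in an InputStreamBody and added to the MultipartEntity in place of the FileBody.

```java
import java.io.*;

// A bounded, read-only view over one region of a large file: reads at most
// 'length' bytes starting at 'offset', without buffering the region.
class ChunkInputStream extends InputStream {
    private final RandomAccessFile raf;
    private long remaining;

    ChunkInputStream(File file, long offset, long length) throws IOException {
        this.raf = new RandomAccessFile(file, "r");
        this.raf.seek(offset);          // jump to the start of the chunk
        this.remaining = length;
    }

    @Override
    public int read() throws IOException {
        if (remaining <= 0) return -1;  // chunk exhausted
        int b = raf.read();
        if (b >= 0) remaining--;
        return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        if (remaining <= 0) return -1;
        int toRead = (int) Math.min(len, remaining);
        int n = raf.read(buf, off, toRead);
        if (n > 0) remaining -= n;
        return n;
    }

    @Override
    public void close() throws IOException {
        raf.close();
    }
}
```

One caveat to check against the HttpClient javadoc: as far as I recall, InputStreamBody reports an unknown content length, so the request is sent with a chunked transfer encoding unless you subclass it to return the known chunk size.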
If that doesn't help, and you still want to split the file into chunks, you could code the server to deal with multiple POSTs, concatenating the data as it receives them, and then manually split up the bytes of the file yourself.
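The manual splitting is mostly offset arithmetic. A minimal sketch (ChunkPlanner is an illustrative name): compute the (offset, length) pair for each chunk, where the last chunk may be shorter than the rest.

```java
// Compute (offset, length) pairs for splitting a file of 'fileSize' bytes
// into chunks of at most 'chunkSize' bytes, to be POSTed one at a time.
class ChunkPlanner {
    static long[][] plan(long fileSize, long chunkSize) {
        // Ceiling division: number of chunks needed to cover the file
        int count = (int) ((fileSize + chunkSize - 1) / chunkSize);
        long[][] chunks = new long[count][2];
        for (int i = 0; i < count; i++) {
            long offset = i * chunkSize;
            chunks[i][0] = offset;
            chunks[i][1] = Math.min(chunkSize, fileSize - offset); // last may be short
        }
        return chunks;
    }
}
```

Each (offset, length) pair would drive one POST carrying a chunkIndex field, as in the question's multi-part example, and the server reassembles the pieces in index order.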
Personally, I prefer my first suggestion, but either way (or neither way, if these don't help), good luck!
Tags: java, httpclient, multipartentity