Hadoop. About file creation in HDFS

I read that whenever a client needs to create a file in HDFS (the Hadoop Distributed File System), the client's file must be of 64 MB. Is that true? How can we load a file into HDFS that is less than 64 MB? Can we load a file that will be used just as a reference for processing another file, and does it have to be available to all datanodes?

I read that whenever a client needs to create a file in HDFS (the Hadoop Distributed File System), the client's file must be of 64 MB.

Could you provide a reference for that? A file of any size can be put into HDFS. The file is split into 64 MB (default) blocks and saved on different data nodes in the cluster.
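
For illustration, here is a minimal Java sketch (not from the original answer) showing that a file far smaller than 64 MB can be written to HDFS; the block size is a per-file property that can be set at creation time. The path, replication factor, and sizes below are assumed example values.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateSmallFile {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path path = new Path("/tmp/hello.txt");   // hypothetical path
        short replication = 3;                    // example replication factor
        long blockSize = 64L * 1024 * 1024;       // 64 MB, the classic default
        int bufferSize = 4096;

        // A file smaller than one block simply occupies a single partial block.
        FSDataOutputStream out =
            fs.create(path, true, bufferSize, replication, blockSize);
        out.writeUTF("far smaller than 64 MB");
        out.close();
    }
}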

Can we load a file that will be used just as a reference for processing another file, and does it have to be available to all datanodes?

It doesn't matter whether a block or file is on one particular data node or spread across several data nodes. Data nodes can fetch data from each other as long as they are part of the same cluster.

Think of HDFS as one big hard drive and write your code to read data from and write data to HDFS. Hadoop takes care of the internals of 'reading from' or 'writing to' multiple data nodes if required, as the sketch below shows.
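
As a minimal sketch of that 'one big hard drive' view, the following Java snippet reads a file back through the standard org.apache.hadoop.fs.FileSystem API; the path is a hypothetical example. Note that the code only names a path, and which data nodes actually serve the blocks is resolved by HDFS internally.

import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadFromHdfs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Open the file by path only; block locations are hidden from the client code.
        FSDataInputStream in = fs.open(new Path("/tmp/hello.txt"));
        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line);
        }
        reader.close();
    }
}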

I would suggest reading the following on HDFS: 1 2 3; the 2nd one is a comic on HDFS.

hadoop hdfs
