
# Big Data Online Practice Test - 12

This test covers Big Data comprehensively, with very important questions ranging from basic to advanced level.
Q. A lead was explaining the concept of erasure coding in Hadoop 3.0 to his team.
Suppose there are two data blocks, DB1 and DB2, of a file.
Both of them will be replicated as follows:
DB1 will have DB1.1 and DB1.2.
DB2 will have DB2.1 and DB2.2.
DBp is a single parity block.
Consider the following statements.
1) The third copy DBp is the result of XOR, that is, DBp = (DB1.1 xor DB2.1).
2) The third copy DBp is the result of OR, that is, DBp = (DB1.1 or DB2.1).
3) In case of failure of DB1.1, the copy will first be recovered from the parity block as DB1.1 = DBp xor DB2.1.
4) In case of failure of DB1.1, the copy will first be recovered from DB1.2, and then from the parity block.
Mark the correct option.
 A. Only the 2nd statement is correct B. The 2nd and 4th statements are correct C. The 1st and 3rd statements are correct D. Only the 1st is correct; the rest are wrong
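The XOR parity relationship in this question can be sketched in a few lines of Python. The block names follow the question; the byte values themselves are illustrative:

```python
# Minimal sketch of XOR-based parity, as used in erasure coding.
# db1 and db2 stand in for the two data blocks; values are illustrative.
db1 = bytes([0b1010, 0b1100])
db2 = bytes([0b0110, 0b0011])

# Parity block: byte-wise XOR of the data blocks (DBp = DB1 xor DB2).
dbp = bytes(a ^ b for a, b in zip(db1, db2))

# Recovery: if db1 is lost, XOR the parity block with the surviving block.
recovered_db1 = bytes(p ^ b for p, b in zip(dbp, db2))
assert recovered_db1 == db1
```

This works because XOR is its own inverse: (DB1 xor DB2) xor DB2 = DB1. OR has no such inverse, which is why a parity block cannot be built with OR.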
Q. Consider the following situation. The Hive query was:
select * from ports sort by numberofships;
On execution, suppose the two reducers receive rows containing the following numberofships values:
Reducer 1 - 12, 24, 8
Reducer 2 - 6, 16, 14
What is the overall output produced?
 A. Rows having the number of ships in the order 8, 12, 24, 6, 14, 16 B. Rows having the number of ships in the order 6, 8, 12, 14, 16, 24 C. Rows having the number of ships in the order 24, 12, 8, 16, 14, 6 D. Rows having the number of ships in the order 24, 16, 12, 14, 8, 6
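Hive's `sort by` guarantees ordering only within each reducer, not across reducers; the reducer outputs are then concatenated. That behaviour can be simulated in Python (reducer contents taken from the question):

```python
# Hive's "sort by" sorts rows within each reducer; the reducer outputs
# are concatenated, so there is no total order across reducers.
reducer1 = [12, 24, 8]
reducer2 = [6, 16, 14]

# Each reducer sorts its own rows independently (ascending by default).
overall = sorted(reducer1) + sorted(reducer2)
print(overall)  # [8, 12, 24, 6, 14, 16]
```

A global total order would require `order by`, which funnels all rows through a single reducer.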
Q. A developer was told to display all the countries from the Travel collection where VisaOnArrival was null.
Mark the correct option.
 A. db.Travel.find({VisaOnArrival:{$is:null}}); B. db.Travel.find({VisaOnArrival:{$eq:null}}); C. db.Travel.find({VisaOnArrival:{$regex:null}}); D. db.Travel.find({VisaOnArrival:{$like:null}});
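In MongoDB, a `{field: {$eq: null}}` query (equivalent to the shorthand `{field: null}`) matches documents where the field is explicitly null as well as documents where the field is missing entirely. That matching rule can be illustrated in pure Python; the sample documents below are made up for the sketch:

```python
# Rough illustration of MongoDB's null-matching rule:
# {field: {"$eq": None}} matches docs where the field is null OR absent.
# The sample documents are invented for this sketch.
travel = [
    {"Country": "A", "VisaOnArrival": True},
    {"Country": "B", "VisaOnArrival": None},   # explicit null
    {"Country": "C"},                          # field missing entirely
]

def matches_null(doc, field):
    """Mimic {field: {"$eq": None}}: true for null values and missing fields."""
    return doc.get(field, None) is None

result = [d["Country"] for d in travel if matches_null(d, "VisaOnArrival")]
print(result)  # ['B', 'C']
```

Note that `$is` and `$like` are not MongoDB operators, which is what the question is probing.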
Q. Below are a few statements about many clients writing to an HDFS file simultaneously.
1) When the NameNode gives permission to a client to write to a file, only that client can work on the file.
2) The NameNode can grant access to two clients to write to the same file at a time.
3) When a file is opened for writing in HDFS by a client, the DataNode gives it the required permission.
4) The NameNode allows access to the first client; the second client gets access from the DataNode.
Group which of these are correct (C) or incorrect (InC).
 A. C-4 & InC - 1,2,3. B. C-3 & InC - 1,2,4. C. C-2,4 & InC - 1,3. D. C-1 & InC - 2,3,4.

Q. The following statements are written about YARN.
1) YARN is a replacement for MapReduce in Hadoop 2.0.
2) YARN handles the scheduling and monitoring of jobs in Hadoop 2.0.
3) MapReduce runs on top of the YARN architecture.
4) YARN has a resource manager for each cluster.
Mark the correct option.
 A. Except 3rd, all statements are valid B. Except 2nd, all statements are invalid C. Except 1st, all statements are valid D. Except 4th, all statements are valid
Q. Consider the following code snippet:
```scala
var A = Set(1, 2, 3)
var B = Set(4, 5, 6)
```
What is the output of A.&(B)?
 A. Set() B. Set(1,2,3,4,5,6); C. Set(5,7,9); D. Set(6,5,4,3,2,1);
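In Scala, `A.&(B)` (equivalently `A & B`) is set intersection; since the two sets share no elements, the result is empty. The same behaviour, sketched in Python for comparison:

```python
# Scala's A.&(B) is set intersection; Python's & on sets behaves the same.
a = {1, 2, 3}
b = {4, 5, 6}
print(a & b)  # set() -- the sets are disjoint, so the intersection is empty
```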
Q. Below are a few features. Mark whether each is supported by HBase, Impala, or both.
1) The implementation language is C++.
2) There is no support for SQL.
3) Offers APIs such as Thrift and a RESTful HTTP API.
4) Supports the sharding method for storing data on different nodes.
Mark the correct option.
 A. Impala, Impala, Both, HBase. B. Both, HBase, Impala, Impala C. HBase, Impala, Both, Both D. Impala, HBase, HBase, Both
Q. In a quiz, participants were told to mark the following statements about the Pig framework as True or False.
1) PigStorage() is a case-sensitive function.
2) The default mode of Pig is local mode.
3) The ORDER BY command is not used for sorting.
4) Pig Latin is used to specify the data flow.
Choose the appropriate option.
 A. True, True, False, False B. True, False, False, True C. False, False, True, True D. True, True, True, False
Q. Which framework am I?
I carry massive amounts of data from many sources to a centralized store.
I am highly reliable, distributed, and configurable!
I was mainly developed to collect streaming data from web servers into HDFS.
I allow data collection in batch as well as streaming mode.
Mark the correct option.
 A. Kafka B. Sqoop C. Flume D. Ambari

Q. Read the following statements regarding Sqoop imports.
1) Sqoop jobs have 1 default reduce task.
2) The number of mappers can be modified by passing the --n-mappers argument to the job.
3) The maximum number of mappers set by Sqoop is 10.
4) A higher number of mappers means more efficient performance.
Mark the correct option.
 A. Except 1st, all statements, are valid B. All the statements are invalid C. Except 3rd,all statements are valid D. Only 2nd and 4th statements are correct
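For reference, the mapper count of a Sqoop import is controlled with `--num-mappers` (short form `-m`); Sqoop imports are map-only jobs. The connection string and table name below are placeholders, not values from the question:

```shell
# Import a table with 8 parallel map tasks (Sqoop imports run no reducers).
# The JDBC URL and table name here are placeholders.
sqoop import \
  --connect jdbc:mysql://dbhost/shipping \
  --table ports \
  --num-mappers 8
```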