An Overview of the DFS on HDFS
Authors: Dr. PL Pradhan, Shrihari Dillip Khatawkar
ABSTRACT This study presents a comprehensive analysis of the distributed file system for Big Data in cloud and edge computing. The authors examine in depth how the Hadoop Distributed File System (HDFS) behaves in order to establish its roles, responsibilities, functionalities, and operations for collecting, storing, processing, generating, distributing, retrieving, analyzing, and maintaining multi-source data, which enables better and faster analytics. Our proposed research shows that HDFS delivers economy, benchmarking, fault tolerance, scalability, reliability, high availability, replication, and synchronization, and that it is designed to be deployed on low-cost hardware, software, and networks for heterogeneous data sets. A further objective is to optimize, normalize, and standardize data and resources so that end users can review data and information without fault, since files are written once and read many times (WORM) by stakeholders. The authors protect this valuable asset through the read and execute/review attributes of the file system's access control mechanism, which integrates well with business continuity planning (BCP) and disaster recovery planning (DRP) in large-scale businesses supporting edge computing. This high-end technique standardizes, optimizes, generalizes, and normalizes heterogeneous data according to needs analysis.
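The WORM pattern, block replication, and access control attributes described above can be illustrated with the standard Hadoop Java client API. The following is a minimal sketch, not the authors' implementation: the NameNode address hdfs://namenode:9000, the file path /data/worm-demo.txt, and the replication factor of 3 are placeholder choices for illustration, assuming the hadoop-client library is on the classpath.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class HdfsWormExample {
    public static void main(String[] args) throws Exception {
        // Point the client at the NameNode; this URI is a placeholder.
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:9000");
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/data/worm-demo.txt");

        // Write once: a closed HDFS file cannot be modified in place.
        try (FSDataOutputStream out = fs.create(file, true)) {
            out.write("multi-source record\n".getBytes(StandardCharsets.UTF_8));
        }

        // Replicate each block three times for fault tolerance
        // across low-cost commodity nodes.
        fs.setReplication(file, (short) 3);

        // Restrict owner, group, and others to read-only, reflecting
        // the read/review access-control attributes discussed above.
        fs.setPermission(file, new FsPermission(
                FsAction.READ, FsAction.READ, FsAction.READ));

        // Read many: any number of stakeholders can re-open the file.
        try (FSDataInputStream in = fs.open(file);
             BufferedReader reader = new BufferedReader(
                     new InputStreamReader(in, StandardCharsets.UTF_8))) {
            System.out.println(reader.readLine());
        }
        fs.close();
    }
}

Because HDFS files are immutable once closed, the single write followed by read-only permissions mirrors the WORM discipline the abstract describes, and the replication call shows how fault tolerance is obtained on inexpensive hardware.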