From Wikipedia, the free encyclopedia
Not to be confused with GmailFS.
Google File System
Operating system: Linux kernel
Type: distributed file system
License: proprietary

Google File System (GFS or GoogleFS) is a proprietary distributed file system developed by Google for its own use.[1] It is designed to provide efficient, reliable access to data using large clusters of commodity hardware. A new version of the Google File System is codenamed Colossus.[2]

Design

[Figure] Google File System: designed for system-to-system interaction, not for user-to-system interaction. The chunkservers replicate the data automatically.

GFS is tailored to Google's core data storage and usage needs (primarily the search engine), which generate enormous amounts of data that must be retained.[1] It grew out of an earlier Google effort, "BigFiles", developed by Larry Page and Sergey Brin in the early days of Google, when the company was still based at Stanford.[1] Files are divided into fixed-size chunks of 64 megabytes,[1] similar to clusters or sectors in regular file systems; chunks are only extremely rarely overwritten or shrunk, since files are usually appended to or read. GFS is also designed and optimized to run on Google's computing clusters: dense nodes built from cheap "commodity" computers, which means precautions must be taken against the high failure rate of individual nodes and the resulting data loss. Other design decisions favor high data throughput, even at the cost of latency.
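The fixed 64 MB chunk size means mapping a byte offset to a chunk is simple integer arithmetic. The sketch below illustrates this; the function names are invented for illustration, not part of any published GFS interface.

```python
CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB, the fixed GFS chunk size

def chunk_index(byte_offset):
    """Index of the chunk that contains a given byte offset in a file."""
    return byte_offset // CHUNK_SIZE

def chunk_offset(byte_offset):
    """Offset of that byte within its chunk."""
    return byte_offset % CHUNK_SIZE

# A read at byte 200,000,000 of a file lands in the third chunk (index 2):
print(chunk_index(200_000_000), chunk_offset(200_000_000))
```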

A GFS cluster consists of multiple nodes, divided into two types: a single Master node and a large number of chunkservers. Each file is divided into fixed-size chunks, which the chunkservers store. Each chunk is assigned a globally unique 64-bit label by the master node at creation time, and the master maintains the logical mappings from files to their constituent chunks. Each chunk is replicated several times throughout the network, with a minimum of three copies, and more for files that are in high demand or need extra redundancy.

The Master server does not usually store the actual chunks; rather, it stores all the metadata associated with them: the tables mapping the 64-bit labels to chunk locations and to the files they make up, the locations of each chunk's copies, which processes are reading from or writing to a particular chunk, and whether a chunk is being "snapshotted" in order to replicate it (usually at the Master's instigation when, due to node failures, the number of copies of a chunk has fallen below the set number). The Master keeps all this metadata current by periodically receiving updates from each chunkserver ("heartbeat messages").

Permissions for modifications are handled by a system of time-limited, expiring "leases": the Master server grants permission to a process for a finite period, during which no other process will be granted permission by the Master server to modify the chunk. The modifying chunkserver, which is always the primary chunk holder, then propagates the changes to the chunkservers holding the backup copies. The changes are not committed until all chunkservers acknowledge them, guaranteeing the completion and atomicity of the operation.
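The lease mechanism can be modeled as a table of expiring grants. Everything here, including the 60-second duration and the `LeaseTable` name, is an illustrative assumption rather than the actual GFS design parameters.

```python
import time

LEASE_DURATION = 60.0  # seconds; an illustrative value, not the real one

class LeaseTable:
    """Toy model of the master's expiring mutation leases."""
    def __init__(self):
        self._leases = {}  # chunk label -> (primary server, expiry time)

    def grant(self, chunk, server, now=None):
        """Grant `server` the primary lease on `chunk`, unless another
        server still holds an unexpired lease on it."""
        if now is None:
            now = time.monotonic()
        holder = self._leases.get(chunk)
        if holder is not None and holder[1] > now and holder[0] != server:
            return False  # another primary's lease has not yet expired
        self._leases[chunk] = (server, now + LEASE_DURATION)
        return True
```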

Programs access the chunks by first querying the Master server for the locations of the desired chunks; if the chunks are not being operated on (i.e., no outstanding leases exist), the Master replies with the locations, and the program then contacts the chunkserver and receives the data directly (similar to Kazaa and its supernodes).
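The read path described above can be sketched end to end: consult the master's metadata, then fetch bytes from a chunkserver directly. The in-memory dictionaries standing in for the master and the chunkservers, and all the names and data in them, are invented for illustration.

```python
CHUNK_SIZE = 64 * 1024 * 1024

# Toy stand-ins for the master's metadata and one chunkserver's storage.
file_chunks = {("/data/log", 0): 0xABCD}             # (path, chunk index) -> label
chunk_locations = {0xABCD: ["cs-1", "cs-2", "cs-3"]} # label -> replica holders
chunkserver_data = {("cs-1", 0xABCD): b"hello world"}

def read(path, offset, length):
    # 1. Ask the master which chunk covers this offset and where it lives.
    label = file_chunks[(path, offset // CHUNK_SIZE)]
    server = chunk_locations[label][0]   # any replica will do for a read
    # 2. Fetch the bytes from that chunkserver directly, bypassing the master.
    data = chunkserver_data[(server, label)]
    start = offset % CHUNK_SIZE
    return data[start:start + length]

print(read("/data/log", 6, 5))  # → b'world'
```

Keeping the master out of the data path is what lets read throughput scale with the number of chunkservers, as the Performance section's benchmarks show.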

Unlike most other file systems, GFS is not implemented in the kernel of an operating system, but is instead provided as a userspace library.

Performance

Judging from benchmarking results,[3] when used with a relatively small number of servers (15), the file system achieves read performance comparable to that of a single disk (80–100 MB/s) but has reduced write performance (30 MB/s), and is relatively slow (5 MB/s) at appending data to existing files. (The authors report no results on random seek time.) Because the master node is not directly involved in data reads (the data pass from the chunkserver directly to the reading client), the read rate increases significantly with the number of chunkservers, reaching 583 MB/s for 342 nodes. Aggregating a large number of servers also yields large capacity, although this is somewhat reduced by storing each piece of data in three independent locations (to provide redundancy).

References

  1. ^ a b c d Carr 2006: ‘Despite having published details on technologies like the Google File System, Google has not released the software as open source and shows little interest in selling it. The only way it is available to another enterprise is in embedded form—if you buy a high-end version of the Google Search Appliance, one that is delivered as a rack of servers, you get Google's technology for managing that cluster as part of the package’
  2. ^ "Google's Colossus Makes Search Real-Time by Dumping MapReduce", High Scalability (weblog), 2010-09-11.
  3. ^ Ghemawat, Gobioff & Leung 2003.

Bibliography

  • Carr, David F. (2006-07-06), "How Google Works", Baseline.
  • Ghemawat, Sanjay; Gobioff, Howard; Leung, Shun-Tak (2003), "The Google File System", Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP '03).
