Google brings Big Data to the masses with Cloud Dataflow beta
Google has finally taken the wraps off its Cloud Dataflow offering, which is designed to let developers without Hadoop skills build sophisticated analytic “pipelines” capable of processing extremely large datasets.
Cloud Dataflow can ingest, transform, normalize, and analyze huge amounts of data, well into the exabyte range. As Google puts it: “Just write a program, submit it and Cloud Dataflow will do the rest. No clusters to manage: Cloud Dataflow will start the needed resources, autoscale them (within the bounds you choose) and terminate them as soon as the work is done.”
Dataflow relies on Google’s Compute Engine cloud service to provide the raw computing power, while Google Cloud Storage and BigQuery are employed to store and access the data.
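To make the “just write a program” model concrete, here is a minimal word-count pipeline sketched against the beta-era Dataflow Java SDK (the com.google.cloud.dataflow.sdk packages); the Cloud Storage paths are placeholders, and exact class names may vary across SDK versions.

```java
import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.io.TextIO;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.transforms.Count;
import com.google.cloud.dataflow.sdk.transforms.DoFn;
import com.google.cloud.dataflow.sdk.transforms.ParDo;
import com.google.cloud.dataflow.sdk.values.KV;

public class MinimalWordCount {
  public static void main(String[] args) {
    // Options come from the command line; with the Dataflow runner, the
    // service itself provisions workers, autoscales them, and tears them down.
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    p.apply(TextIO.Read.from("gs://YOUR_BUCKET/input/*.txt"))  // placeholder path
        // Split each input line into individual words.
        .apply(ParDo.of(new DoFn<String, String>() {
          @Override
          public void processElement(ProcessContext c) {
            for (String word : c.element().split("[^a-zA-Z']+")) {
              if (!word.isEmpty()) c.output(word);
            }
          }
        }))
        // Count occurrences of each distinct word.
        .apply(Count.<String>perElement())
        // Format each (word, count) pair as a text line.
        .apply(ParDo.of(new DoFn<KV<String, Long>, String>() {
          @Override
          public void processElement(ProcessContext c) {
            c.output(c.element().getKey() + ": " + c.element().getValue());
          }
        }))
        .apply(TextIO.Write.to("gs://YOUR_BUCKET/output/counts"));  // placeholder path

    p.run();
  }
}
```

Submitted with the Dataflow runner (in the 1.x SDK, roughly --runner=DataflowPipelineRunner plus a project and staging bucket), the same code that runs locally executes on managed Compute Engine workers, which is the “no clusters to manage” point above.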
Besides the Dataflow news, Google simultaneously announced an update to its BigQuery service, which provides a Structured Query Language (SQL) interface for analyzing very large datasets. SQL is one of the most common programming languages, used by almost all traditional relational databases, which means it’s well understood by the vast majority of database administrators.
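The appeal of that SQL interface is easiest to see in code. The sketch below uses the google-cloud-bigquery Java client, which postdates this announcement, to run an aggregate query over one of Google’s public sample tables; treat it as an illustration of the interface rather than what was available at the time.

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FieldValueList;
import com.google.cloud.bigquery.QueryJobConfiguration;

public class BigQueryTopWords {
  public static void main(String[] args) throws InterruptedException {
    // Uses application-default credentials for the active Cloud project.
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

    // An ordinary SQL aggregate over a public sample table: the ten most
    // frequent words across the works of Shakespeare.
    QueryJobConfiguration query = QueryJobConfiguration.newBuilder(
        "SELECT word, SUM(word_count) AS total "
            + "FROM `bigquery-public-data.samples.shakespeare` "
            + "GROUP BY word ORDER BY total DESC LIMIT 10")
        .build();

    // BigQuery runs the scan on Google's side; the client just pages results.
    for (FieldValueList row : bigquery.query(query).iterateAll()) {
      System.out.println(
          row.get("word").getStringValue() + "\t" + row.get("total").getLongValue());
    }
  }
}
```

The point the announcement is making is that nothing here looks like Hadoop: a database administrator who knows SQL can query a multi-terabyte table with a few lines of familiar syntax.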
Source: siliconangle.com

