SQream Technologies has created a relational database management system that uses graphics processing units (GPUs) to perform big data analytics by means of structured query language (SQL). SQream was founded in 2010 by CEO Ami Gal and CTO and VP of R&D Razi Shoshani and is headquartered in Tel Aviv, Israel. The company joined the Google Cloud Partner Advantage program as a build partner via its no-code ETL and analytics platform, Panoply.
By using the computational power of GPUs, SQream’s analytics platform can ingest, transform and query very large datasets on an hourly, daily or yearly basis, enabling its customers to extract complex insights from that data.
“What we are doing is enabling organizations to reduce the size of their local data center by using fewer servers,” Gal told EE Times. “With our software, the customer can use a couple of machines with a few GPUs each instead of a large number of machines and do the same job, achieving the same results.”
According to SQream, the analytics platform can ingest up to 1,000× more data than conventional data analytics systems, 10× to 50× faster, at 10% of the cost. It also does so with roughly 10% of the carbon footprint: running the same workload on conventional CPU-based systems rather than GPUs would require many more compute nodes and therefore far more energy.
SQreamDB
SQream’s flagship product is SQreamDB, a SQL database that lets customers run complex analytics on petabyte-scale data (up to 100 PB), gaining time-sensitive business insights faster and at lower cost than competing solutions.
As shown in Figure 1, the analytics platform can be deployed in the following ways:
- Query engine: In this mode, the platform analyzes data from any source (internal or external) and in any format, on top of existing analytical and storage solutions. The data to be analyzed does not need to be duplicated.
- Data preparation: Raw data is transformed through denormalization, pre-aggregation, feature generation, cleaning and BI processes (see the sketch after this list). After that, it is ready to be processed by machine-learning, BI and AI algorithms.
- Data warehouse: In this mode, data is stored and managed at enterprise scale. Decision-makers, business analysts, data engineers and data scientists can analyze it and gain valuable insights from BI tools, SQL clients and other analytics apps.
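To make the data-preparation and query-engine modes concrete, here is a minimal sketch of the kind of pre-aggregation and follow-up analytical SQL such a platform runs. It uses Python with the standard-library sqlite3 module purely as a stand-in engine so the example is runnable; the table and column names are illustrative assumptions, not SQream’s schema or documented API.

```python
# Sketch of the "data preparation" mode: denormalize raw IoT readings and
# pre-aggregate them into a daily summary table that BI/ML jobs can query.
# sqlite3 is used only as a runnable stand-in; in practice the same SQL
# would be submitted to SQreamDB through its own connector.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Raw sensor data as it might arrive from the factory floor (illustrative schema).
cur.execute("CREATE TABLE sensor_readings (sensor_id INTEGER, ts TEXT, value REAL)")
cur.executemany(
    "INSERT INTO sensor_readings VALUES (?, ?, ?)",
    [(1, "2024-01-01", 0.42), (1, "2024-01-01", 0.47), (2, "2024-01-02", 0.91)],
)

# Pre-aggregation: roll raw readings up to one row per sensor per day,
# so downstream BI and ML queries scan far less data.
cur.execute(
    """
    CREATE TABLE daily_sensor_stats AS
    SELECT sensor_id,
           ts         AS day,
           COUNT(*)   AS n_readings,
           AVG(value) AS avg_value,
           MAX(value) AS max_value
    FROM sensor_readings
    GROUP BY sensor_id, ts
    """
)

# Query-engine mode: an analyst's time-sensitive question against the prepared data.
for row in cur.execute("SELECT * FROM daily_sensor_stats ORDER BY sensor_id, day"):
    print(row)
con.close()
```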
Due to its modest hardware requirements and use of compression, SQream addresses the petabyte-scale analytics market, helping companies save money and reduce carbon emissions. Using statistics from the GreenBook guide, SQream benchmarked standard analytics on 300 terabytes of data and found a 90% reduction in carbon emissions.
By taking advantage of the computational power and parallelism of GPUs, the software lets SQream’s customers use far fewer data-center resources to view and analyze their data.
“Instead of having six racks of servers, we can use only two servers to do the same job, and this allows our customers to save resources on the cloud,” Gal said.
According to SQream, quite a few semiconductor manufacturers run large numbers of IoT sensors in production. IoT in general is a use case that generates a great deal of data and, consequently, a great deal of derived analytics at scale.
Another factor contributing to massive datasets is that much of the analytics run in data centers relies on machine-learning algorithms, which must be trained on large datasets to reach high accuracy. Running those algorithms on ever-bigger datasets requires more storage, more computational power, more networking and more analytics capacity.
“The more data you give machine-learning algorithms, the more accurate they are and the more satisfied the customer becomes,” Gal said. “We’re seeing how manufacturing, telecoms, banking, insurance, financial, healthcare and IoT companies are creating huge datasets that require a large data center. We can help in any of those use cases.”
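Gal’s point about data volume and model accuracy can be illustrated with a small learning-curve experiment: train the same model on progressively larger slices of a dataset and watch cross-validated accuracy improve. A minimal sketch using scikit-learn follows; the dataset and model are arbitrary illustrative choices, not anything SQream uses.

```python
# Accuracy of the same model trained on growing fractions of a dataset,
# illustrating why bigger training sets (and hence more storage and compute)
# tend to yield more accurate models. Assumes scikit-learn is installed.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = load_digits(return_X_y=True)

# Evaluate the model on 10%, 32.5%, 55%, 77.5% and 100% of the training data.
train_sizes, _, val_scores = learning_curve(
    LogisticRegression(max_iter=2000),
    X, y,
    train_sizes=np.linspace(0.1, 1.0, 5),
    cv=5,
)

for n, score in zip(train_sizes, val_scores.mean(axis=1)):
    print(f"{n:4d} training samples -> mean CV accuracy {score:.3f}")
```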
In data analytics, scalability is a crucial factor. SQream continually works on the platform architecture to make sure it remains scalable to ever-larger datasets, which means staying up to date on potential future bottlenecks in computing, processors, networking, storage and memory.
The company is also looking to offer the entire product as a service and is working with the major cloud providers to achieve that.
According to Gal, customers often do not care what has to happen behind the scenes (the computers, networking, storage and memory required) to run their workloads. The result can be a great deal of energy use, cooling and carbon emissions, which makes for an extremely inefficient process.
“By releasing the same software, but as a service, the customer will continue with his mindset of not caring how the process is performed behind the scenes, and we will make the process efficient for him under the hood of the cloud platform,” Gal said.
Millions of computers are added to cloud platforms every year. This trend is growing exponentially, and companies are not going to stop doing analytics.
“I think one of the things we need to do as people solving architectural and computer problems for the customers is to make sure the architecture we offer them is efficient, robust, cost-effective and scalable,” Gal said.