QueryIO is a Hadoop-based big data and analytics solution.
QueryIO is used to store and analyze vast amounts of structured and unstructured data.
QueryIO is especially well suited to processing unstructured data: it enables users to give that data a structure and then analyze it using standard SQL.
QueryIO enables you to leverage the vast and mature infrastructure built around SQL and relational databases and utilize it for your Big Data Analytics needs.
QueryIO builds on Hadoop's scalability and reliability.
QueryIO makes it easy to scale to hundreds of Compute+Store cluster nodes and petabytes of data.
QueryIO enhances basic Hadoop to provide a comprehensive Big Data Analytics solution. Major QueryIO components:
QueryIO Server UI
Hadoop - Distributed Storage and Computing
QueryIO is built on the Hadoop distributed computing platform.
Hadoop is a robust and scalable platform used by major companies such as Yahoo, Facebook and eBay.
Hadoop uses clusters of commodity Compute+Store machines to provide virtually linear scalability.
Data Tagging and MetaData Extension services
QueryIO provides a framework to build a structure around unstructured Big Data.
Standard SQL-based Big Data querying, analysis and visualization
Once the structure is assembled, you can query your Big Data using standard SQL syntax and tools. Leverage tools and skills your organization already has to query, analyze and visualize your Big Data.
Cluster Setup and Management
Manage your Hadoop cluster via a Web-based interface. Define hosts and services, perform maintenance, and more with a few clicks.
HA, Monitoring, Tuning
Experience peace of mind and confidence with our comprehensive monitoring at every level (disk, machine, services, JVM, etc).
Drive MapReduce jobs via Hive SQL (HiveQL)
Hive is a data warehouse system for Hadoop that facilitates easy data summarization, ad-hoc queries, and the analysis of large datasets stored in Hadoop compatible file systems.
Hive provides a mechanism to project structure onto this data and query the data using a SQL-like language called HiveQL.
At the same time this language also allows traditional map/reduce programmers to plug in their custom mappers and reducers when it is inconvenient or inefficient to express this logic in HiveQL.
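To illustrate how HiveQL projects structure onto data already in Hadoop, consider the following sketch. The table name, columns, and HDFS path are hypothetical, purely for illustration:

```sql
-- Project a schema onto raw tab-delimited log files already stored in HDFS.
-- Table name, columns and path are illustrative, not part of QueryIO.
CREATE EXTERNAL TABLE web_logs (
  ip     STRING,
  ts     STRING,
  url    STRING,
  status INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/data/weblogs';

-- Standard SQL-style summarization over the projected structure;
-- Hive compiles this into MapReduce jobs behind the scenes.
SELECT status, COUNT(*) AS hits
FROM web_logs
GROUP BY status
ORDER BY hits DESC;
```

Because the table is EXTERNAL, Hive only layers a schema over the existing files; dropping the table does not delete the underlying data.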
Data Integration services
Import/export data from Amazon S3, HDFS, FTP/SFTP sources, SSH, HTTP/HTTPS, any mail server, any type of database, or your local file system.
Leverage our value-added services such as data encryption and compression on a per-file or system-wide basis.
Use user-based or Kerberos-based authentication.
QueryIO provides a comprehensive set of client interfaces to integrate seamlessly with your infrastructure.
Use our S3-compatible REST API to port your application from Amazon S3 to QueryIO storage, or use the standard REST API to integrate with QueryIO.
Data Analytics - Spreadsheets, Charts, Grouping, Aggregations, UDFs, Formatting
Features of QueryIO
Standard SQL interface for Big Data querying and analysis
QueryIO provides a framework which allows you to perform standard SQL queries on your structured and unstructured Big Data.
It also provides an easy-to-use interface through which you can generate SQL queries and design reports to present your processed data.
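For instance, once file MetaData has been given structure, a report can be expressed as an ordinary SQL query. The table and column names below are hypothetical, intended only to show the style of query involved:

```sql
-- Hypothetical MetaData table: one row per file ingested into HDFS,
-- with extended attributes (file type, size) extracted at write time.
SELECT file_type,
       COUNT(*)       AS file_count,
       SUM(file_size) AS total_bytes
FROM hdfs_file_metadata
GROUP BY file_type
ORDER BY total_bytes DESC;
```

A query like this could back a report or chart summarizing storage consumption by file type across the cluster.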
Data Tagging and MetaData extension
QueryIO provides advanced manual and automated data tagging feature which allows you to define properties for files as they are being written to HDFS.
QueryIO automatically stores basic MetaData for files stored in HDFS.
QueryIO further extends the MetaData layer by enabling you to define additional MetaData.
QueryIO understands dozens of file formats, such as PDF, XLS and DOC documents, image files, audio and video files, etc.
Web-based cluster management
QueryIO provides an easy-to-use Web-based user interface for Hadoop cluster management.
Comprehensive monitoring, including rules, alerts, etc.
QueryIO management features make it easy to perform operations such as applying rules and alerts, running the balancer, and performing health checks on the nodes in your cluster.
Value added services such as S3-compatible REST API, FTP interface, etc.
QueryIO supports various interfaces to store and process your Big Data.
Import and export from/to Amazon S3 with the click of a button.
Import/export from an FTP server or a local file system.
Encrypt and compress your data on a per-file or file-system-wide basis. Use your QueryIO store as an Amazon S3 endpoint via our Amazon S3-compatible REST API.
Web based query design and spreadsheet analytics
Copyright © 2017 QueryIO Corporation. All Rights Reserved.
QueryIO, "Big Data Intelligence" and the QueryIO Logo are trademarks
of QueryIO Corporation. Apache, Hadoop and HDFS are trademarks of The Apache Software Foundation.