Scality, a world leader in object and cloud storage, today announced strong developer adoption of its open-source Scality S3 Server object storage across healthcare, scientific research, finance, media, manufacturing design and government use cases. Available under the Apache 2.0 license, Scality S3 Server is free to use and to embed into developer applications where high availability, resiliency, and scalability are paramount. Scality S3 Server has garnered nearly 600,000 DockerHub pulls since its introduction in June 2016, and many companies, such as Polyconseil and Cityzen Data, use it in production.
“Historically, we stored user data in a shared network file system, which did not work very well with containers, let alone with third party applications. Now, we exclusively use Scality S3 Server to store all that data,” said Christian Patry, System Engineer, BlueSolutions by Polyconseil. “We sought an S3-compatible storage solution, but since our volumetric needs are low, we didn’t want to invest in an overly complex system. We are big users of Docker in our production environment and implemented the Docker version of Scality S3 Server. It is efficient, secure with encryption and S3 authentication, and very easy to maintain.”
Polyconseil is an atypical consulting firm combining counsel and technical expertise. Polyconseil was chosen by the Bolloré group to create and manage the information systems of its car-sharing services, originally Autolib in Paris, and now in ten other cities around the world.
“Cityzen Data provides the Warp 10 data management platform for collecting, storing and delivering value from time series data stemming from all kinds of sensors to help customers accelerate real world workflows,” said Mathias Herberts, co-founder and CTO at Cityzen Data. “We rely on Scality S3 Server object storage to provide the archival storage backend for these applications. It enables us to access massive amounts of packed sensor data from any cloud using Scality’s single interface.”
Cityzen Data provides a leading-edge platform for collecting, storing, analyzing, and deriving value from deep pools of historical sensor data within the energy, aeronautics, and industrial IoT markets. It is the preeminent data management platform that helps to quickly propel customers from sensors to services.
Scality S3 Server is an open-source implementation of the Amazon S3 API, packaged in Docker containers to benefit from the power of the Docker ecosystem. Small enough to run on a laptop so developers can test their applications locally, it is also robust enough to be deployed in production, and it can run in an HA (highly available) configuration with Docker Swarm. The Scality S3 Server API is the same component Scality RING customers use for Amazon S3 API compatibility, and it reliably stores billions of objects for mission-critical applications.
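Because the server ships as a Docker container, trying it locally amounts to pulling and running one image. A minimal sketch follows; the image name, port, and development credentials are taken from Scality's public DockerHub listing and may change, so verify them against the current documentation before relying on them:

```shell
# Pull and run Scality S3 Server locally (image name and port assumed
# from the DockerHub listing; confirm before use).
docker run -d --name s3server -p 8000:8000 scality/s3server

# Any S3-compatible client can then target http://localhost:8000.
# The development build ships with default test credentials
# (access key "accessKey1", secret "verySecretKey1") intended only
# for local experimentation, never for production.
```

For a production-grade deployment, the same image can be scheduled across a Docker Swarm cluster for the HA configuration described above.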
For customers who require comprehensive enterprise capabilities, Scality RING extends the S3 Server with advanced enterprise features, including reinforced security, access controls, and complete scale-out ability, all with guaranteed uptime. Scality RING also supports essential single sign-on enterprise protocols such as Microsoft Active Directory and LDAP.
“The Amazon S3 API has emerged as the de facto standard for fully leveraging the scalability, reliability, and cost effectiveness of object storage, but public cloud is not the answer to everything. Developers especially want to be able to code and test locally, or to embed the API in their CI (continuous integration) environment to maximize their productivity. Some organizations want the power of object storage, but prefer the control of private cloud deployment,” said Giorgio Regni, CTO at Scality. “Scality S3 Server empowers developers to leverage the Amazon S3 API for free in a variety of situations.”
Scality, a world leader in object and cloud storage, develops cost-effective Software Defined Storage: Scality RING, which serves over 500 million end-users worldwide with over 800 billion objects in production, and the open-source Scality S3 Server. Scality RING software deploys on any industry-standard x86 server, delivering performance, 100% availability and data durability, while integrating easily into the datacenter thanks to its native support for directory integration, traditional file applications and over 45 certified applications. Scality’s solutions serve the specific storage needs of Global 2000 Enterprise, Media and Entertainment, Government and Cloud Provider customers while delivering up to 90% reduction in TCO versus legacy storage.
Making predictions based purely on historical precedent is inherently flawed, but thinking in scenarios reduces uncertainty. Most investment reports do not publish formal risk assessments. Analysts typically deliver recommendations as a buy, hold or sell call, often derived from a fundamental analysis of a firm’s intrinsic value and its projected cash flows. However, existing research finds that while target prices do convey information, they also tend to be optimistic, inaccurate and of little long-run investment value.
A better way to present a fuller picture of future possibilities is to put multiple scenarios on the table instead of limiting predictions to a single-point outcome. When forecasters are asked to explore outcomes they otherwise would not have considered, they take more factors into account and allow for upsets to their base-case scenario. This not only accommodates uncertainty but also reduces optimistic bias, improving forecasting overall.
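The multi-scenario approach can be sketched as a small, probability-weighted calculation. Every figure below (the scenario names, probabilities, and prices) is hypothetical, chosen only to show how several outcomes replace a single target price:

```python
# Toy scenario-weighted forecast: replace a single point estimate with a
# probability-weighted set of outcomes. All figures are hypothetical.
scenarios = {
    "bull": {"probability": 0.25, "price": 150.0},  # upside case
    "base": {"probability": 0.50, "price": 120.0},  # most likely case
    "bear": {"probability": 0.25, "price": 80.0},   # downside case
}

# The scenario probabilities must cover the full outcome space.
assert abs(sum(s["probability"] for s in scenarios.values()) - 1.0) < 1e-9

# Expected value across scenarios, plus a measure of downside exposure
# that a single-point forecast would hide entirely.
expected_price = sum(s["probability"] * s["price"] for s in scenarios.values())
downside_spread = scenarios["base"]["price"] - scenarios["bear"]["price"]

print(f"expected price: {expected_price:.2f}")       # 117.50
print(f"base-to-bear spread: {downside_spread:.2f}")  # 40.00
```

The point of the exercise is not the weighted average itself but the spread: forcing the bear case onto the table makes the forecaster articulate what could break the base case.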
Big data’s potential just keeps growing. Taking full advantage means companies must incorporate analytics into their strategic vision and use it to make better, faster decisions.
Big data’s transformational potential has not been oversold. In fact, the convergence of several technology trends is accelerating progress. The volume of data continues to double every three years as information pours in from digital platforms, wireless sensors, virtual-reality applications, and billions of mobile phones. Data-storage capacity has increased, while its cost has plummeted. Data scientists now have unprecedented computing power at their disposal, and they are devising algorithms that are ever more sophisticated.
The greatest advances have occurred in location-based services and in US retail, both areas with competitors that are digital natives. In contrast, manufacturing, the EU public sector, and healthcare have captured less potential value. And new opportunities have arisen, further widening the gap between the leaders and laggards.
Leading companies are using their capabilities not only to improve their core operations but also to launch entirely new business models. The network effects of digital platforms are creating a winner-take-most situation in some markets. The leading firms have remarkably deep analytical talent taking on various problems—and they are actively looking for ways to enter other industries. These companies can take advantage of their scale and data insights to add new business lines, and those expansions are increasingly blurring traditional sector boundaries.
Where digital natives were built for analytics, legacy companies have to do the hard work of overhauling or changing existing systems. Adapting to an era of data-driven decision making is not always a simple proposition. Some companies have invested heavily in technology but have not yet changed their organizations so they can make the most of these investments. Many are struggling to develop the talent, business processes, and organizational muscle to capture real value from analytics.
The first challenge is incorporating data and analytics into a core strategic vision. The next step is developing the right business processes and building capabilities, including both data infrastructure and talent. It is not enough simply to layer powerful technology systems on top of existing business operations. All these aspects of transformation need to come together to realize the full potential of data and analytics. The challenges incumbents face in pulling this off are precisely why much of the value we highlighted in 2011 is still unclaimed.
The urgency for incumbents is growing, since leaders are staking out large advantages, and hesitating increases the risk of being disrupted. Disruption is already happening, and it takes multiple forms. Introducing new types of data sets (orthogonal data) can confer a competitive advantage, for instance, while massive integration capabilities can break through organizational silos, enabling new insights and models. Hyperscale digital platforms can match buyers and sellers in real time, transforming inefficient markets. Granular data can be used to personalize products and services—including, most intriguingly, healthcare. New analytical techniques can fuel discovery and innovation. Above all, businesses no longer have to go on gut instinct; they can use data and analytics to make faster decisions and more accurate forecasts supported by a mountain of evidence.
The next generation of tools could unleash even bigger changes. New machine-learning and deep-learning capabilities have an enormous variety of applications that stretch into many sectors of the economy. Systems enabled by machine learning can provide customer service, manage logistics, analyze medical records, or even write news stories.
These technologies could generate productivity gains and an improved quality of life, but they carry the risk of causing job losses and dislocations. Previous MGI research found that 45 percent of work activities could be automated using current technologies; some 80 percent of that (roughly 36 percent of all work activities) is attributable to existing machine-learning capabilities. Breakthroughs in natural-language processing could expand that impact.
Data and analytics are already shaking up multiple industries, and the effects will only become more pronounced as adoption reaches critical mass—and as machines gain unprecedented capabilities to solve problems and understand language. Organizations that can harness these capabilities effectively will be able to create significant value and differentiate themselves, while others will find themselves increasingly at a disadvantage.
Combining pure prediction with causal inference will get us closer to being able to address the really hard problems that involve sussing out all of the alternate outcomes that could result from implementing different policies.
Many public-policy problems have questions of causal inference at their core. That’s the really hard stuff, and you have to proceed with caution to understand the effect of something. But that’s most of the world.
The momentum of big data and machine learning in academic research and practical applications is invigorating. The gap between research and practice, which used to be insurmountable, is disappearing. It’s so cool when our research gets adopted within months.
It’s especially gratifying to witness the widespread adoption of predictive methods that not too long ago were the exclusive province of a specialized cadre of data scientists. It’s amazing, because you’re empowering people who in a previous generation wouldn’t have used a computer for anything other than word processing. Now it’s not just the geeky engineers, but people at high levels of a company are interested in the most recent research. They recognize the power of being able to use data to optimize decisions and investments. They’re building big-data models and open-source software to make great predictions with cutting-edge techniques. It’s been completely democratized, and that’s a huge success story.