Self-Driven, Automated, Accelerated, and Optimized Data

Datera has written several blogs comparing and contrasting the revolution that self-driving, autonomous, and electric cars have wrought in the auto space with the advantages we now see using autonomic storage systems. Engineers in Silicon Valley often own a Tesla, and being fans of operational excellence, they quickly realize that NOT having to make manual driving adjustments on the way to work makes driving faster and more efficient. They can therefore focus on other critical details as they are conveyed to the office (back when we all met at an office!).

Our customers, the Global 2000, are using software-defined storage (SDS), hyper-converged infrastructure (HCI), and disaggregated HCI (dHCI) to help their administrators, procurement teams, and IT staff in general get more done with fewer hours spent manually tuning complex IT infrastructure. As a bonus, removing human error from repetitive tasks is a perfect fit for the kinds of AI and machine-learning intelligence Datera has had from inception.

In this blog, we want to walk you through a few use-case examples where our open APIs, advanced automation practices, and intelligent storage and L3 networking have converged to allow our customers to deploy, maintain, and quickly scale workloads, applications, and new environments.

As with Self-Driving Cars, Automated, Policy-Driven Containerization is a Storage Game Changer

A handful of companies have been preparing for this eventuality. Google, for example, uses tens of millions of servers in its operations. To dynamically compose them for each application, Google created its own software, known as “Borg.” Borg has spawned another computing management innovation: its open-source variant “Kubernetes,” which is rapidly democratizing Google-style cloud computing.

Kubernetes allows the disaggregation of applications into scalable microservices, which it dynamically maps onto data center resources based on their respective requirements and capabilities. This way, compute can be composed and delivered as a “swarm” of services, resulting in game-changing operational flexibility, utilization, and economics.

Datera is to Data as Kubernetes is to Compute

Datera provides sophisticated data orchestration to complement Kubernetes’ compute orchestration. By leveraging rich machine intelligence, the Datera data services platform continuously reshapes itself to optimize performance, resilience, and cost. Easily add heterogeneous media and servers, and make changes ON THE FLY, to compose your perfect system. No more monolithic legacy architecture, no more vendor lock-in!

Never migrate again: add capacity, performance, and resilience, and reduce recovery time, simply by adding or changing server nodes.

Datera is the only data services platform that is driven by application service level objectives (“SLOs”), which makes it the ideal self-driving data foundation for any cloud infrastructure:

  • NoOps: Application SLOs automatically drive the Datera data infrastructure instead of humans, thus providing the foundation for 24×7 self-driving infrastructure.
  • Dynamic resource shaping: The Datera platform enables all its elements (data, exports, services, etc.) to float across it, so that SLOs can constantly reshape it, ultimately spanning private and public clouds.
  • Native I/O performance: Datera built its own low latency log-structured data store with a lockless distributed coherence protocol to match the performance of high-end enterprise arrays, and thus allows replacing them with a modern, cloud-age data services platform.
  • Future-proof extensibility: Datera supports live-insertion of new technologies, which offers a game-changing price/performance band that can accommodate a wide spectrum of applications and eliminates data migration forever.
  • Role-based multi-tenancy: The Datera platform uses adaptive micro-segmentation to deliver security, resilience and network virtualization as a service.

Datera = Rack-Scale Infrastructure

Resource orchestration platforms, such as Kubernetes and Datera, align well with highly configurable hardware architectures, such as rack-scale, which allows independent pooling and scaling of resources to compose application-defined, tailored data center services.

Some of the key tenets of rack-scale computing are:

  • Commodity hardware: Commodity servers provide disaggregated resources that can independently be pooled and scaled, offering wide price/performance flexibility.
  • High-speed fabrics: Fast networks significantly reduce communication overhead and latency between the disaggregated resources and make them practical.
  • Composability: Intelligent software dynamically composes the disaggregated resources into a cohesive system.
  • API-driven: A single, global REST API makes the resources easy to provision, consume, move and manage.
  • NoOps: The resources are consumed automatically, driven by applications rather than operating systems, hypervisors or humans.

Flat Networks Allow Effective Resource Pooling

We will discuss L3 networking and Datera in an upcoming blog. Many of the largest enterprises in the world are deploying Datera software-defined storage on highly virtualized L3 networks, further automating their entire data centers!

Driving it All Home…

As the world continues to accelerate toward self-driving cars and billions of connected compute devices, the ability to keep up with their demands requires data centers to become just as intelligent, autonomous and agile as they are.

Self-driving infrastructure continuously recomposes itself to adapt to fluctuating application demands, with fully autonomic operations, driven by applications – not by humans.

Rack-scale architecture aligns well with self-driving infrastructure, as it offers independently composable resource pools. Flat networks reduce latency to make resource pooling effective, and virtual L3 networks let applications and the data center network seamlessly co-adapt.

Datera uniquely blends all these elements seamlessly together for data, creating an operations-free data services platform that transforms infrastructure and makes storage as we know it obsolete.

Just as machine intelligence is increasingly enriching everyday human experiences, self-driving infrastructure is transforming the way digital technology is developed, deployed, and delivered.

Datera’s self-driving data infrastructure delivers game-changing speed, scale, agility, and economics – to always navigate the best road ahead to any destination.

For more general Datera information, we recommend reading our white papers:

Built for Performance

Built for Constant Change

Built for Autonomous Operations

Built for Continuous Availability

For detailed discussions around Containers, Kubernetes, or automation, please reach us at sales@datera.io and share any specific capability you would like to learn more about. We look forward to the opportunity!

Datera Storage: Automating Storage Fleet Management @Scale with Ansible

Managing storage configurations manually at scale is tedious, slow, and prone to user error. As enterprises adopt newer data-center technologies, automating such configuration tasks becomes essential. Software-defined storage such as Datera performs both storage-node lifecycle management and storage configuration management, hence the need to automate configuration management across such clusters to easily provision, deploy, and manage the storage nodes.

What is Ansible?

Ansible is an open-source automation tool, or platform, used for IT tasks such as configuration management, application deployment, intra-service orchestration, and provisioning.

The Ansible system consists of:

  • CONTROLLER: One or a few servers installed with the Ansible packages and controller configuration.
  • CLIENTS: The nodes on which the IT provisioning tasks are performed.

The Ansible framework consists of:

  • MODULES: These programs are written as resource models of the desired state of the system. They perform individual IT automation tasks and are typically published by OS vendors, storage vendors, and others.
  • PLAYBOOK: Playbooks contain the steps the user wants to execute on a particular machine. A playbook is a YAML file describing a series of tasks to perform for data-center provisioning workflows. The tasks defined in a playbook are executed sequentially.
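The sequential, state-driven execution model described above can be sketched in a few lines of Python. This is only a toy illustration of the concept; the task shapes and state handling here are invented, not real Ansible internals:

```python
# Minimal sketch of the playbook execution model: an ordered list of tasks,
# each applied idempotently and reporting whether it changed the system.
# Task keys and values below are illustrative, not real Datera modules.

def run_task(task, state):
    """Apply one task idempotently; report 'changed' only when state moves."""
    key, desired = task["key"], task["value"]
    if state.get(key) == desired:
        return {"changed": False, "failed": False}
    state[key] = desired
    return {"changed": True, "failed": False}

def run_playbook(tasks, state):
    results = []
    for task in tasks:          # tasks execute strictly in order
        result = run_task(task, state)
        results.append(result)
        if result["failed"]:
            break               # a failed task stops the play
    return results

cluster_state = {}
playbook = [
    {"name": "set NTP server", "key": "ntp", "value": "10.0.0.1"},
    {"name": "set DNS server", "key": "dns", "value": "10.0.0.2"},
    {"name": "set NTP server", "key": "ntp", "value": "10.0.0.1"},  # re-run: no change
]
results = run_playbook(playbook, cluster_state)
print([r["changed"] for r in results])  # [True, True, False]
```

Note how re-running the same task reports no change: this idempotency is what makes playbooks safe to run repeatedly against a fleet.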

Datera Ansible Module Architecture

The Datera Ansible package consists of the Datera Ansible modules and the Datera Python SDK, which provide interfaces for various REST API calls to the Datera storage cluster, covering system settings, storage provisioning, failure-domain setup, cloud backup management, IP pool management, and more. Datera has been actively developing these Python modules to help perform the aforementioned tasks.

The Ansible clients are customer-specific application nodes that require iSCSI/NVMe-oF interfaces to connect to the Datera clusters for the data path. Once password-less SSH is enabled on the clients, the Ansible playbook can be used to discover and log in to the storage target(s) on the Datera storage cluster.

Datera Module Task Design

  1. Task Module: Runs the task and sets the required parameters to be passed as a hook to the Datera task interface function. Returns the response for the task as pass or fail, and whether the state was changed or unchanged.
  2. Datera Task Interface Module: Uses the Datera Python SDK to perform the requested storage configuration task, performs error handling, and returns the results of the task to the task module.
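A hedged sketch of this two-layer design follows, with a stand-in for the SDK. All class and function names here are illustrative assumptions, not the actual Datera module code:

```python
# Hypothetical two-layer sketch of the module design described above: a task
# module builds parameters and delegates to an interface layer that wraps
# SDK calls and error handling. Names are invented for illustration only.

class FakeSDK:
    """Stand-in for the Datera Python SDK used by the interface layer."""
    def __init__(self):
        self.settings = {}
    def set_system_setting(self, name, value):
        self.settings[name] = value

def datera_task_interface(sdk, action, params):
    """Interface layer: perform the SDK call, translate errors to results."""
    try:
        if action == "system_setting":
            sdk.set_system_setting(params["name"], params["value"])
            return {"failed": False, "changed": True}
        raise ValueError(f"unknown action: {action}")
    except Exception as exc:
        return {"failed": True, "changed": False, "msg": str(exc)}

def task_module(sdk, action, **params):
    """Task layer: set the parameters and return pass/fail plus changed state."""
    result = datera_task_interface(sdk, action, params)
    result["status"] = "fail" if result["failed"] else "pass"
    return result

sdk = FakeSDK()
print(task_module(sdk, "system_setting", name="ntp", value="10.0.0.1")["status"])  # pass
print(task_module(sdk, "bogus_action")["status"])                                  # fail
```

Separating the task layer from the SDK-facing interface layer keeps error handling in one place and lets many task modules share it.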

The Datera Ansible Modules support the following features to help with the automation of configuration management of Datera Storage Clusters:

  1. System Settings: Helps configure the system settings for a Datera storage cluster, including the NTP server, the DNS server, IP pools for the cluster, and cluster-wide data-efficiency settings.
  2. Node Lifecycle Management: Adding new storage nodes to the cluster, taking nodes temporarily offline, and removing storage nodes from the system.
  3. Storage Provisioning: Creation of AppInstances, creating/extending volumes, and target assignment for consistency groups.
  4. Cloud Backup Management: Creating and managing remote cloud backup endpoints.

Ansible Support Solution Benefits:

    • Multiple Datera clusters can be managed by a single playbook on an Ansible controller using the Universal Datera Configuration details.
    • Datera Ansible modules configure the Datera storage clusters directly, as opposed to using an intermediary proxy module to hook into the Datera Python SDK for REST API calls.
    • Reduce inconsistencies and TCO, and simplify configuration, by reducing the time to deploy and manage software-defined storage systems.

For more general Datera information, we recommend reading our white papers:

Built for Performance

Built for Constant Change

Built for Autonomous Operations

Built for Continuous Availability

For detailed discussions around Ansible, or automation, please reach us at sales@datera.io and share any specific capability you would like to learn more about. We look forward to the opportunity!

*Special thanks to Samarth Kowdle, for his development contributions on our Ansible Modules!

Datera Snapshot and Clones Bring Traditional Data Protection Services to a More Flexible Software Defined Platform

Storage integrated snapshots and clones have become standard features in many enterprise storage products. Snapshots can be the foundation of a data protection strategy, providing a point-in-time copy of data that can be used for instant recovery. Clones are similar to snapshots in that they can be created instantaneously and represent a virtual copy of data. Whereas data in a snapshot is immutable, data in a clone can be changed and can be used for operations like data analytics or test/dev. Datera relies on a unique combination of technologies to provide snapshots and clones that are high performance, efficient, resilient and provide flexible management.

Software Defined Storage has matured, and now offers the full range of services traditional storage arrays have offered for two decades. Datera customers enjoy deduplication, compression, encryption and fully integrated snapshot and clone capabilities. These services are even more powerful on a platform with autonomic, intent and policy driven control of workloads and volumes. Datera = SDS flexibility + full enterprise storage services + industry leading automation.

The Datera Storage Platform (DSP) architecture is designed for scale-out and efficiently distributes I/O across storage nodes for resiliency and performance. The system uses a multi-replica copy technique to ensure maximum data resiliency. All data and metadata associated with a snapshot are replicated across nodes, like the source data, to ensure the same level of resiliency and consistency. The redirect-on-write (RoW) snapshot implementation reduces storage-system resource impact by minimizing the number of writes required when using snapshots. The datastore implements its metadata using augmented B+ trees. New data written after a snapshot is created is written to a new block; previously written data in a snapshot does not have to be read and rewritten to storage, as in some other implementations.

Snapshots for Data Protection

Snapshots are an essential component of a complete data protection solution for mission-critical applications with the most demanding service-level requirements. This is especially true with large data sets that would take a long time to back up with traditional methods, which move a lot of data and have a negative impact on system performance. Taking snapshots does not require data to be moved. Snapshot operations have minimal impact on the system, so snapshot recovery points can be created much more frequently than a traditional host file-system-based backup. More frequent snapshots provide a lower Recovery Point Objective (RPO), which means the potential for data loss is much less. The other advantage of snapshots is that the Recovery Time Objective (RTO) is also much lower, which minimizes application downtime.
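As a rough back-of-the-envelope illustration of the RPO difference (the schedule numbers here are hypothetical, not Datera recommendations):

```python
# Hypothetical comparison of worst-case data loss (RPO) for a nightly
# host-based backup versus frequent storage snapshots.
nightly_backup_rpo_min = 24 * 60   # one backup per day -> up to 24h of loss
snapshot_interval_min = 15         # snapshot every 15 minutes

improvement = nightly_backup_rpo_min / snapshot_interval_min
print(f"Snapshots reduce potential data loss by {improvement:.0f}x")  # 96x
```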

Clones

The technical implementation of clones is very similar to that of snapshots, using the same B+ trees and space-efficient pointer mechanisms. The major difference is that clones are exported as new read/write volumes. Writes to a clone cause the metadata for the original volume and the clone to diverge. Another difference is that snapshots belong to the same volume and Application Instance as the source, whereas clones are created as a different volume in a new Application Instance and Storage Instance. One important advantage of Datera clones over most other implementations is that clones are completely independent of the source volume. Another unique advantage is the ability to change the media or placement policy for the clone to provide performance or cost characteristics different from the parent volume.
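The pointer-sharing idea behind RoW snapshots and writable clones can be sketched with a toy block map. The real Datera metadata uses augmented B+ trees; this dict-based model is only a conceptual illustration:

```python
# Toy sketch of redirect-on-write snapshots and writable clones using
# block-pointer maps. A snapshot copies pointers (no data moves); a clone
# shares pointers but is writable, so its map diverges on the first write.

class Volume:
    def __init__(self, block_map=None):
        self.block_map = dict(block_map or {})  # logical block -> data

    def write(self, lba, data):
        self.block_map[lba] = data              # new data goes to a new entry

    def snapshot(self):
        # Immutable copy of the pointer map: an instant point-in-time view.
        return dict(self.block_map)

    def clone(self):
        # Independent writable volume that starts by sharing the pointers.
        return Volume(self.block_map)

src = Volume()
src.write(0, "v1")
snap = src.snapshot()
cln = src.clone()
cln.write(0, "v2")           # clone diverges; source and snapshot untouched
src.write(1, "v3")           # source keeps changing after the snapshot
print(snap, src.block_map, cln.block_map)
```

The snapshot still sees the original data even as the source and clone each go their own way, which is the essence of the divergence described above.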

Copy Data Management (CDM)

The idea of Copy Data Management (CDM) involves taking a holistic approach to re-using copies of data (virtual copies) for multiple use cases such as:

  • Disaster recovery
  • DR test
  • Operational recovery (backup)
  • Data analytics
  • Test/dev

CDM leverages embedded Datera technologies like snapshots and clones. The user points to source data and specifies “virtual copies” of the data. DSP supports clones that can be used to quickly make space-efficient copies of data for test/dev, analytics, and other copy data management workflows. For example, a single remote copy of data can be used for site disaster recovery, DR testing, and even operational recovery if the recovery points (snapshots) are retained long enough.

Cloud Backup (Datera2Object)

Datera2Object replication is a solution for backing up and restoring Datera primary snapshot data to and from a remote public cloud or on-prem object store. The solution uses replication to create copies of Application Instances and volumes in remote object storage, such as a Datera system with S3 object services. Backups can also be replicated to Amazon Web Services (AWS) S3, Google Cloud Platform (GCP), and generic S3 object storage systems. The replication takes place directly between the Datera source system and the object storage system, making the process very efficient with low resource impact.


Datera2Object supports these primary use cases:

  • Creating space efficient backups of Datera Application Instances and volumes in a remote object store
  • Restoring Datera Application Instances and volumes from a remote object store to either the source system or an alternate Datera system
  • Migrating Datera Application Instances and volumes to remote object store with the ability to migrate back to the same or another Datera system

Ecosystem Support

The Datera plug-in for vCenter also enables snapshot data to be cloned and mounted to other ESX hosts, enabling granular recovery or supporting test/dev workflows. The plug-in exposes the power of storage clones to VI admins with a friendly user interface to automate workflows.

Datera vCenter Plug-in for VMware Snapshot List

The Datera Cinder driver provides seamless integration with OpenStack and several advanced functionalities to enable a seamless operator experience. The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to containerized workloads on Container Orchestration Systems (COs) like Kubernetes. Datera’s powerful storage classes, and policy driven workloads are a natural fit with Kubernetes and integrate with native snapshots for creating recovery points and doing instantaneous restores. The Datera CloudStack storage plug-in provides Datera storage plugin support for CloudStack Primary storage.

The Bottom Line

Software-defined technology is quite mature today and continuing to mature. Software Defined Storage delivers all the performance, capabilities, resiliency, and features of decades-old enterprise storage array architectures, with the modern simplicity, elasticity, and automation that IT professionals need to support businesses operating in a dynamic environment, where agility, efficiency, and economics can deliver a significant positive impact to the outcome.

Please click here to read a much more detailed and technical Solution Brief on Datera Snapshots and Clones.

For more general Datera information, we recommend reading our white papers:

Built for Performance

Built for Constant Change

Built for Autonomous Operations

Built for Continuous Availability

We can schedule a demo at any time. Please reach us at sales@datera.io and share any specific capability you would like to learn more about. We look forward to the opportunity!

Datera Rapidly & Seamlessly Scales Our Analytics Platform With Kubernetes

We have been focused on Datera’s “Secret Sauce” for the past few posts; this week we are taking a short break to explain how we use Kubernetes to power our customer analytics portal: our own “Datera + Kubernetes” case study! Next week, we will wrap up our “Secret Sauce” series as we explore Datera’s policy-driven, open-API automation and scripting extensibility.

Datera and Kubernetes were made for each other, and given the early inclusion of K8s into the Datera ecosystem, we use it ourselves! This blog is a quick intro describing how we upgraded and enhanced our first cloud-based analytics platform to use Kubernetes, helping us scale performance and capacity as we have recently added a large number of new customers.

Hosting persistent storage (PVs and PVCs) is what Datera was designed for, and it seemed natural this year to migrate our analytics platform (internally, we call it “Insights”) to Kubernetes, enabling rapid scaling and seamless handling of our larger volume of customers.

Our case study describes in some technical detail how we deployed 13 Kubernetes nodes and used the flexibility of the Datera data storage system to combine just the right mix of SATA/SAS flash nodes, with NVMe and Optane likely in the future. The ability to mix and match nodes for performance, capacity, or cost considerations is one of the unique aspects helping the Global 1000 shift from legacy SAN and Fibre Channel systems to Datera and high-performance Ethernet.

Datera is a fully disaggregated scale-out storage platform that runs over multiple standard protocols (iSCSI, Object/S3), combining heterogeneous compute platform/framework flexibility (HPE, Dell, Fujitsu, Intel, and others) with rapid deployment velocity and access to data from anywhere. Datera’s Container Storage Interface (CSI) driver integrates deeply with the Kubernetes runtime, allowing deployment of an entire stateful multi-site environment.

Please click here to read much more about how Datera leveraged the power of Kubernetes, on Datera storage…

Click here to read our CSI Deployment Guide.

For more general Datera information, we recommend reading our white papers:

Built for Performance
Built for Constant Change
Built for Autonomous Operations
Built for Continuous Availability

We can schedule a demo at any time. Please reach us at sales@datera.io and share any specific capability you would like to learn more about. We look forward to the opportunity!

Secret Sauce Part 2: Managing a Scalable Data Infrastructure by “Intent” for a Dynamic Data Center

In a prior blog, we provided insight into Datera’s patented Lock-Less Coherency Protocol and how it gives customers the ability to architect a heterogeneous data infrastructure with different generations and brands of servers, as well as different classes of media, all managed seamlessly and transparently to users.

Why does that matter? Because as time passes, and customers scale their infrastructure, they need to deal with changes in technology and requirements. Datera software allows them to absorb all these changes seamlessly and dynamically.

In this blog we will provide insight into another Datera innovation; easily managing a heterogeneous data infrastructure by Intent. This is critical as our customers scale the infrastructure over time and need to manage constant change, whether that be in technology or business requirements.

As a refresher, below is that high level view of the Datera architecture: a hybrid cloud, software-defined platform implemented for both block and object data enabling a data infrastructure that is heterogeneous, elastic, dynamic, and autonomous.

The Datera architecture has three distinct layers:

  • A heterogeneous software defined storage layer for block and object data
  • A data management layer to make the infrastructure dynamic
  • A programmable layer to make the infrastructure autonomous and easily programmable/extensible

In this blog I want to take you back under the Datera hood and share another of the core technologies that makes Datera’s architecture distinct from any other company in the space.

Architected for Managing Constant Change

Since the 1960s, storage products have been built for performance and resiliency, and nowhere is this more prominent than in products targeting enterprise storage. Datera acknowledged those imperatives but felt it was necessary to add one more: the ability to adapt to change. Performance and resiliency remain important, but Datera was built to reflect the modern enterprise, a place where being agile in response to opportunity or threat is of equal importance.

Datera is based on a scale-out, shared-nothing distributed system architecture. Many modern applications are based on this approach, including many first-generation software-defined storage products. While scale-out, shared-nothing architectures provide an important backbone, they are not enough. The Datera architects recognized the need to give enterprises developer velocity and operational agility without forcing them onto public cloud service providers. With this in mind, the architects made several crucial choices in designing the Datera software that have resulted in enterprise-class storage capabilities while uniquely fulfilling the agility/velocity promise of software-defined storage.

Policy Driven, Managing by Intent

Datera provides a volume service akin to Amazon Web Services Elastic Block Storage, but with a key distinction: the preferred method of provisioning storage for an application is via application templates that specify the storage needs and policies for the application. The templates describe the application’s data volumes, their access protocols, resiliency profiles, security settings, efficiency, fault domains, tenancy, performance, data reduction, and data protection/replication schedules. In the Datera system, this is referred to as an Application Instance, and it codifies the intent of the users without requiring them to be experts in data storage.

Inside the system, the templates map to policies driving service-level objectives (SLOs). SLOs are part of a closed-loop feedback system of reading/writing data, gathering telemetry, comparing to the SLOs, and then adjusting the placement and management of the data to meet the application SLOs. The administrator only needs to specify the application’s intent via the templates, and the system will automatically adjust to meet that intent. The most compelling part of managing by intent is that the policies can be changed at any time. As part of the closed-loop system, a change to the SLOs will be detected, and any needed changes will be initiated and managed transparently and non-disruptively to the application. Unlike previous enterprise storage systems, the ability to change policies is not a simple management veneer over a rigid architecture. Datera software was built from the ground up to support the dynamic nature of modern data centers.
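A minimal sketch of such a closed-loop, intent-driven adjustment, assuming invented tier names and thresholds (this is a conceptual model, not the Datera SLO engine):

```python
# Sketch of the feedback loop described above: an SLO policy (the intent) is
# compared against observed telemetry, and data placement is adjusted until
# the intent is met. Tier names and thresholds here are illustrative only.

def reconcile(intent, telemetry, placement):
    """One pass of the loop: adjust placement toward the stated intent."""
    if telemetry["latency_ms"] > intent["max_latency_ms"]:
        placement["tier"] = "nvme"        # promote to faster media
    elif intent["optimize_for"] == "cost" and placement["tier"] == "nvme":
        placement["tier"] = "sata"        # demote when cost is the intent
    return placement

intent = {"max_latency_ms": 2, "optimize_for": "performance"}
placement = {"tier": "sata"}
placement = reconcile(intent, {"latency_ms": 8}, placement)
print(placement["tier"])                  # promoted to "nvme"

# The operator changes only the intent; the loop moves the data.
intent = {"max_latency_ms": 50, "optimize_for": "cost"}
placement = reconcile(intent, {"latency_ms": 1}, placement)
print(placement["tier"])                  # demoted back to "sata"
```

The key design point is that the operator never issues a "move data" command: changing the policy is enough, and the loop converges on its own.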

The One, Two Punch!

In summary:

The combination of innovations like lock-less coherency and management by intent is the one-two punch that enables enterprises to achieve the highest operational agility, velocity, efficiency, and best economics, without sacrificing any performance or availability compared with traditional enterprise arrays.

  1. The need for a heterogeneous data infrastructure — Lock-less coherency enables customers to build a seamless heterogeneous infrastructure with different classes/brands of servers and media.
    • This is an inevitable need as time passes and customers need to scale the infrastructure, deal with new requirements and technological changes.
  2. Managing by Intent for ease, velocity and autonomous adaptability — The data management layer is where the magic occurs, radically simplifying management, and enabling customers to easily adopt whichever media technology is best suited to meet the need of their applications.
    • If the intent is very low latency, the application will be directed to an NVMe or Optane server; if the intent is for the application’s data to be cost effective, the data will be directed to a SATA/SAS server. Totally automatic and transparent.
    • What if the data that was on the high-performance, more expensive nodes can now be stored on less expensive media? This is the magic! Change the intent, and the data will automatically be moved, live, to the lower-cost servers!

This is just another innovation that Datera has developed to deliver on the broader outcomes customers are looking for as they architect the next-generation data infrastructure. Stay tuned: we will provide more insight on other critical technologies that make Datera a truly unique architecture…

For more information, we recommend reading our white papers:

Built for Performance
Built for Constant Change
Built for Autonomous Operations
Built for Continuous Availability

We can schedule a demo at any time. Please reach us at sales@datera.io and share any specific capability you would like to learn more about. We look forward to the opportunity!


Secret Sauce Part 1: Lock-less Coherency, Enabling True Heterogeneous Data Infrastructure

In a prior blog, we shared the high-level view of the Datera architecture and the innovation that gives customers the opportunity to radically reduce complexity and costs while significantly improving operational agility and efficiency.

The end result is a hybrid cloud, software-defined platform implemented for both block and object data, enabling a data infrastructure that is heterogeneous, elastic, dynamic, and autonomous.

The Datera architecture has three distinct layers:

  • A heterogeneous software defined storage layer for block and object data
  • A data management layer to make the infrastructure Dynamic
  • A programmable layer to make the infrastructure Autonomous and easily programmable/extensible

In this blog I want to take you under the Datera hood and share one of the core technologies that makes Datera’s architecture distinct from any other company in the space: Datera’s patented Lock-less Coherency technology…

Customers select software-defined scale-out solutions for a few basic reasons:

  1. They want rapid technology adoption and automation which they cannot achieve with rigid, hard-to-manage enterprise-class arrays that lock them in with a fixed architecture.
  2. They want agility; being able to scale performance and capacity linearly to avoid complex guessing as to what they will need.
  3. They want freedom from hardware lock-in and better economics, with the ability to leverage industry-standard servers and media, thus allowing the flexibility to adopt whichever media makes the most sense for their applications.

While at a macro level, most software defined solutions may claim the ability to deliver on these requirements, many struggle in the details.  This is where architecture matters. Asking yourself a few basic questions will provide transparency and shed light on the outcomes you require to modernize your data center:

  1. How is the solution able to scale performance linearly as new nodes are added? Or asked a different way, will adding servers have an incremental or diminishing impact on the overall performance?
  2. Can the solution deliver high performance AND data efficiency, with compression and deduplication at the same time? Or will I need to compromise on one dimension or the other?
  3. Can the solution handle multiple types, generations, or brands of servers seamlessly? Or will I be locked in, forced to use the same class of server going forward as when I first deployed the platform? And if I can add different servers, will that be easy to do?
  4. Can the solution seamlessly and easily handle new types of media that become available over time? Or will the solution need to use the same type of media over the life of the platform? And if I can use different media, how easy will that be?
  5. Does the solution have an architecture that helps increase media endurance? Or will applications that are write-intensive reduce media life?

The Power and Innovation Of Lock-less coherency

We can confidently say, and demonstrate, that the Datera platform can answer yes to all of these questions! That is accomplished through the Datera Lock-less Coherency Protocol, the secret sauce of the Datera platform.

Datera has a shared nothing architecture, meaning there is no centralized master node that stores all of the cluster metadata. That said, it is imperative for a distributed storage system like Datera to support changes initiated across multiple access points of the cluster.

Most scale-out storage systems use some form of Distributed Lock Management (DLM) to ensure data coherency across multiple access points of the cluster. DLM requires that all the nodes in a cluster communicate with each other. As the size of the cluster increases, this intra-node communication adversely affects the foreground I/O bandwidth, impacting performance.

Together with Datera’s shared-nothing architecture, a time-based Lock-less Coherency Protocol is employed that ensures writes are synchronized across participating nodes without increasing intra-node communication. Thus cluster IOPS and throughput scale linearly as nodes are added to the cluster.

The Lock-less Coherency Protocol provides correctness by ensuring that out-of-order writes are executed in time order to the underlying storage media. A two-stage write process and distributed metadata maps (current and future maps) enable changes made from any node to be synchronized with other nodes without using distributed lock management.

Locks add repair complexity and latency. The advantages of using a time-based Lock-less Coherency Protocol include the following:

  1. No locks required when repairing to avoid a new overwrite losing to a repair write.
  2. No locks required even if writes from one client follow different paths.
  3. No locks required even if writes from different clients follow different paths.
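A toy model of time-ordered, lock-less write resolution helps make the points above concrete. This is a deliberate simplification for illustration; Datera’s actual two-stage protocol and metadata maps are more sophisticated:

```python
# Toy illustration of lock-free, time-ordered write resolution: replicas may
# receive writes in any arrival order, but each write carries a timestamp and
# the newest timestamp wins, so no distributed lock is ever needed.

def apply_write(block, write):
    """Apply a write only if it is newer than what the block already holds."""
    if write["ts"] > block["ts"]:
        block.update(write)
    return block

# Two replicas receive the same two writes in opposite orders...
w_old = {"ts": 1, "data": "A"}
w_new = {"ts": 2, "data": "B"}

replica1 = apply_write(apply_write({"ts": 0, "data": None}, w_old), w_new)
replica2 = apply_write(apply_write({"ts": 0, "data": None}, w_new), w_old)

# ...yet both converge on the newest data without taking a lock.
print(replica1["data"], replica2["data"])  # B B
```

Because ordering is decided by timestamps rather than by who holds a lock, the same rule resolves overwrites racing repairs and writes arriving over different paths.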

The Lock-less Coherency Protocol mechanism provides multiple benefits:

  • All acknowledgements to the host application are done out of non-volatile memory (NVDIMM, NVRAM, 3D XPoint). The write latency for host applications is not affected by the underlying storage media type, so applications observe consistent, predictable latency.
  • It allows log-structured writes from NVDIMM down to the underlying flash media. This extends the performance and endurance of the flash media, as large block writes are written to the media instead of many small block writes.
  • All data-efficiency services, such as deduplication, compression, and encryption, are performed behind the ACKs sent to the host application. In typical systems that must do inline deduplication, there is noticeable latency at the host application; that is not the case with the Lock-less Coherency Protocol.

The Lock-less Coherency technology enables Datera to deliver a solution that provides not only scalable performance with data-efficiency functionality, but also the highest flexibility and heterogeneity in the servers and media customers can adopt on the fly over time.

And this is just one of the secret sauces that Datera has developed to deliver on the broader outcomes customers are looking for as they architect the next-generation data infrastructure. Stay tuned: we will provide more insight on the critical technologies that make Datera a truly unique architecture…

For more information, we recommend reading our white papers:

Built for Performance

Built for Constant Change

Built for Autonomous Operations

Built for Continuous Availability

We can schedule a demo at any time. Please reach us at sales@datera.io and share any specific capability you would like to learn more about. We look forward to the opportunity!