
Releases: openebs/openebs

0.9

24 May 08:29
Pre-release

Getting Started

Prerequisite to install

  • Kubernetes 1.12+ is installed
  • Make sure that you run the installation steps below with cluster admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • NDM helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude paths before installing OpenEBS (see the example after this list).
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running in RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.
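
For reference, a path filter entry in the NDM Config Map typically looks like the following. This is a minimal sketch assuming the openebs-ndm-config ConfigMap layout shipped with the operator YAML; verify the keys against the version you install.

apiVersion: v1
kind: ConfigMap
metadata:
  name: openebs-ndm-config
  namespace: openebs
data:
  node-disk-manager.config: |
    filterconfigs:
      # path filter: devices matching the "exclude" paths are ignored by NDM
      - key: path-filter
        name: path filter
        state: true
        include: ""
        exclude: "/dev/loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-"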

Using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.9.0.yaml

Using helm stable charts

helm repo update
helm install  --namespace openebs --name openebs stable/openebs
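
After applying either method, a quick sanity check (a generic kubectl check, not specific to this release) is to confirm that the control plane pods and the default StorageClasses are present:

kubectl get pods -n openebs
kubectl get storageclass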

For more details refer to the documentation at: https://docs.openebs.io/

Change Summary

OpenEBS Release 0.9 has multiple enhancements and bug fixes which include:

  • Support for Dynamic Provisioning of Local PV (using hostpath); an illustrative PVC is shown after this list
  • Support for Backup/Restore of cStor Volumes using Velero OpenEBS Plugin
  • Support for distributing/scheduling of the cStor Volume Replicas efficiently for StatefulSets
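
The Local PV support can be consumed through a PVC such as the one below. This is a minimal sketch; it assumes the openebs-hostpath StorageClass that the 0.9 operator is expected to create, so verify the StorageClass name in your install.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-hostpath-pvc
spec:
  storageClassName: openebs-hostpath
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 5Gi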

For a detailed change summary and steps to upgrade from the previous version, please refer to: Release 0.9 Change Summary

For a more comprehensive list of open issues uncovered through e2e, please refer to open issues.

0.9.0-RC2

16 May 16:14
Pre-release

Getting Started

Prerequisite to install

  • Kubernetes 1.12+ is installed
  • Make sure that you run the installation steps below with cluster admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • NDM helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters in the NDM Config Map to exclude paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running in RHEL/CentOS, you may need to set the security context appropriately. Refer to Configuring OpenEBS with selinux=on.

Using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.9.0-RC2.yaml

For change summary refer to: Release 0.9 Change Summary

0.8.2

15 Apr 17:39
Pre-release

Getting Started

Prerequisite to install

  • Kubernetes 1.9.7+ is installed
  • Make sure that you run the installation steps below with cluster admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • NDM helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters on NDM to exclude paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running in RHEL/CentOS, you may need to set the security context appropriately.

Using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.8.2.yaml

Using helm stable charts

helm repo update
helm install  --namespace openebs --name openebs stable/openebs

For more details refer to the documentation at: https://docs.openebs.io/

Change Summary

This is a patch release with limited scope.

Major Bugs Fixed

  • Fixed an issue where a newly added cStor Volume Replica may not be successfully registered with the cStor target, if the cStor target tries to connect to the Replica before the Replica is completely initialized. (mayadata-io/cstor#214)
  • Fixed an issue with Jiva Volumes where the target could mark the Replica as timed out on IO, even when the Replica might actually be processing the sync IO. Fixed by increasing the timeout check to be greater than the timeout set on the sync IO. (openebs-archive/jiva#194)
  • Fixed an issue with Jiva Volumes that prevented Replicas from reconnecting to the Target, if the initial registration failed to successfully process the handshake request. (openebs-archive/jiva#195)
  • Fixed an issue with Jiva Volumes that would cause the Target to restart when a send diagnostic command was received from the client. (openebs-archive/jiva#197). Many thanks to @rgembalik for help with debugging the issue and validating the RC build.
  • Fixed an issue causing a PVC to be stuck in pending state when more than one PVC was associated with an Application Pod. (openebs-archive/maya#1045)
  • Fixed an issue causing cStor Volume Replica CRs to be stuck, when the OpenEBS namespace was being deleted. (openebs-archive/maya#955)

New Capabilities

  • Support for new Storage Policies:
    • Pool Tolerations (Applicable to cStor Pools) (openebs-archive/maya#1007).
      Pool Tolerations policy can be used to allow scheduling of cStor Pool Pods on nodes with taints. The Tolerations can be specified in the cStor SPC as follows, where t1, t2 represent the taints and the conditions as expected by Kubernetes (an example of applying such taints with kubectl is shown further below).
        name: cstor-sparse-pool
        annotations:
          cas.openebs.io/config: |
            - name: Tolerations
              value: |-
                t1:
                  effect: NoSchedule
                  key: nodeA
                  operator: Equal
                t2:
                  effect: NoSchedule
                  key: app
                  operator: Equal
                  value: storage
      

Sample Storage Pool Claims, Storage Class and PVC configurations to make use of new features can be found here: Sample YAMLs
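
For context, the taints t1 and t2 referenced in the Pool Tolerations example above would be placed on nodes with kubectl, for instance (node names are illustrative):

kubectl taint nodes node-1 nodeA:NoSchedule
kubectl taint nodes node-2 app=storage:NoSchedule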

For a more comprehensive list of open issues uncovered through e2e, please refer to open issues.

Additional details and notes on upgrade and uninstall are available on the Project Tracker Wiki.

0.8.1

23 Feb 12:59
Pre-release

Getting Started

Prerequisite to install

  • Kubernetes 1.9.7+ is installed
  • Make sure that you run the installation steps below with cluster admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • NDM helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters on NDM to exclude paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running in RHEL/CentOS, you may need to set the security context appropriately.

Using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.8.1.yaml

Using helm stable charts

helm repo update
helm install  --namespace openebs --name openebs stable/openebs

Sample Storage Pool Claims, Storage Class and PVC configurations to make use of new features can be found here: Sample YAMLs

For more details refer to the documentation at: https://docs.openebs.io/

Change Summary

New Capabilities

  • Support for new Storage Policies:
    • TargetTolerations (Applicable to both cStor and Jiva volumes) (openebs-archive/maya#921). TargetTolerations policy can be used to allow scheduling of cStor or Jiva Target Pods on nodes with taints. The TargetTolerations can be specified in the StorageClass as follows, where t1, t2 represent the taints and the conditions as expected by Kubernetes.
      annotations:
        cas.openebs.io/config: |
          - name: TargetTolerations
            value: |
              t1:
                key: "key1"
                operator: "Equal"
                value: "value1"
                effect: "NoSchedule"
              t2:
                key: "key1"
                operator: "Equal"
                value: "value1"
                effect: "NoExecute"    
      
    • ReplicaTolerations (Applicable to Jiva volumes) (openebs-archive/maya#921). ReplicaTolerations policy can be used to allow scheduling of Jiva Replica Pods on nodes with taints. The ReplicaTolerations can be specified in the StorageClass as follows, where t1, t2 represent the taints and the conditions as expected by Kubernetes.
      annotations:
        cas.openebs.io/config: |
          - name: ReplicaTolerations
            value: |
              t1:
                key: "key1"
                operator: "Equal"
                value: "value1"
                effect: "NoSchedule"
              t2:
                key: "key1"
                operator: "Equal"
                value: "value1"
                effect: "NoExecute"    
      
    • TargetNodeSelector policy can be used with cStor volumes as well (openebs-archive/maya#914). TargetNodeSelector policy can be specified in the Storage Class as follows, to pin the targets to a certain set of Kubernetes nodes labelled as node: storage-node :
      annotations:
         openebs.io/cas-type: cstor
         cas.openebs.io/config: |
          - name: TargetNodeSelector
            value: |-
              node: storage-node
      
    • ScrubImage (Applicable to Jiva Volumes) (openebs-archive/maya#936). After the Jiva volumes are deleted, a Scrub job is scheduled to clear the data. The container used to complete the scrub job is available at quay.io/openebs/openebs-tools:3.8. For deployments where the images can't be downloaded from the Internet, this image can be hosted locally and the location specified using the ScrubImage policy in the StorageClass as follows:
      annotations:
        cas.openebs.io/config: |
          - name: ScrubImage
            value: localrepo/openebs-tools:latest
      

Enhancements

  • Enhanced the documentation for better readability and revamped the guides for cStor Pool and Volume provisioning.
  • Enhanced the quorum handling logic in cStor volumes to reach quorum in a quicker way by optimizing the retries and timeouts required to establish the quorum.(mayadata-io/cstor#182)
  • Enhanced the cStor Pool Pods to include a Liveness check to fail fast if the underlying disks have been detached. (openebs-archive/maya#894)
  • Enhanced the cStor Volume - Target Pods to get rescheduled faster in the event of a node failure. (openebs-archive/maya#894)
  • Enhanced the cStor Volume Replica placement to distribute randomly between the available Pools. Prior to this fix, the Replicas were always placed on the first available Pool. When Volumes were launched with a Replica count of 1, all such Replicas were scheduled onto the first Pool only. (openebs-archive/maya#910)
  • Enhanced the Jiva and cStor provisioner to set the upgrade strategy in the respective deployments to Recreate. (openebs-archive/maya#923)
  • Enhanced the node-disk-manager to fetch additional details about the underlying disks via openSeaChest. The details will be fetched for devices that support them. (openebs-archive/node-disk-manager#185)
    A new section called “Stats” is added that will include information like the following:
    Stats:
      Device Utilization Rate:  0
      Disk Temperature:
        Current Temperature:   0
        Highest Temperature:   0
        Lowest Temperature:    0
      Percent Endurance Used:  0
      Total Bytes Read:        0
      Total Bytes Written:     0
    
  • Enhanced the node-disk-manager to add additional information to the DiskCRs, such as whether the disk is partitioned or has a filesystem on it. (openebs-archive/node-disk-manager#197)
    FileSystem and Partition details will be included in the Disk CR as follows:
    partitionDetails:
      - fileSystemType: None
        partitionType: "0x83"
      - fileSystemType: None
        partitionType: "0x8e"
    
    If the disk is formatted as a whole with a filesystem:
    fileSystem: ext4
    
  • Enhanced the Disk CRs to include a property called managed, that will indicate whether node-disk-manager should modify the Disk CRs. When managed is set to false, node-disk-manager will not update the status of the Disk CRs. This is helpful in cases where administrators would like to create Disk CRs for devices that are not yet supported by node-disk-manager, like partitioned disks or NVMe devices. A sample YAML for specifying a custom disk can be found here. (openebs-archive/node-disk-manager#192)
  • Enhanced the debuggability of cStor and Jiva volumes by adding additional details about the IOs to the CLI commands and logs.
  • Enhanced uZFS to include the zpool clear command (mayadata-io/cstor#186) (mayadata-io/cstor#187)
  • Enhanced the OpenEBS CRDs to include custom columns to be displayed in kubectl get .. output of the CR. This feature requires K8s 1.11 or higher. (openebs-archive/maya#925)
    The sample output looks as follows:
    $ kubectl get csp -n openebs
    NAME                     ALLOCATED   FREE    CAPACITY    STATUS    TYPE       AGE
    sparse-claim-auto-lja7   125K        9.94G   9.94G       Healthy   striped    1h
    
    $ kubectl get cvr -n openebs
    NAME                                                              USED  ALLOCATED  STATUS    AGE
    pvc-9ca83170-01e3-11e9-812f-54e1ad0c1ccc-sparse-claim-auto-lja7   6K    6K         Healthy   1h
    
    $ kubectl get cstorvolume -n openebs
    NAME                                        STATUS    AGE
    pvc-9ca83170-01e3-11e9-812f-54e1ad0c1ccc    Healthy   4h
    
    $ kubectl get disk
    NAME                                      SIZE          STATUS   AGE
    sparse-5a92ced3e2ee21eac7b930f670b5eab5   10737418240   Active   10m
    

Major Bugs Fixed

  • Fixed an issue where the cStor target was not rebuilding the data onto replicas after the underlying cStor Pool was recreated with new disks. This scenario occurs in Cloud Provider or Private Cloud with Virtual Machine deployments, where ephemeral disks are used to create cStor pools and, after a node reboot, the node comes up with new ephemeral disks. (mayadata-io/cstor#164) (openebs-archive/maya#899)
  • Fixed an issue where a cStor volume caused a timeout for the iSCSI discovery command and could potentially trigger a K8s vulnerability that can bring down a node with high RAM usage. (openebs-archive/istgt#231)
  • Fixed an issue where the cStor iSCSI target was not conforming to the iSCSI protocol w.r.t. the immediate bit during the discovery phase. (openebs-archive/istgt#231)
  • Fixed an issue where some of the internal snapshots and clones created for cStor Volume rebuild purpose were not cleaned up. (mayadata-io/cstor#200)
  • Fixed an issue where Jiva Replica Pods continued to show as Running even when the underlying disks were detached. The error is now handled and the Pod is restarted. (#1387)
  • Fixed an issue where Jiva Replicas that have a large number of sparse extent files would time out while connecting to their Jiva Targets. This will cause data unavailability if the Target can’t establish quorum with already connected replicas. (openebs-archive/jiva#184)
  • Fixed an issue where node-disk-manager pods were not getting upgraded to latest version, even after the image version was changed. (openebs-archive/node-disk-manager#200)
  • Fixed an issue where NDM would get into CrashLoopBackOff when run in unprivileged mode. (openebs-archive/node-disk-manager#198)
  • Fixed an issue with mayactl in displaying cStor volume details when the cStor target is deployed in its PVC namespace. (openebs-archive/maya#891)

Backwards Incompatibilities

  • From 0.8.0:
    • mayactl snapshot commands are deprecated in favor of the kubectl approach of taking snapshots.
  • For previous relea...

0.8

07 Dec 05:31
Pre-release

Getting Started

Prerequisite to install

  • Kubernetes 1.9.7+ is installed
  • Make sure that you run the installation steps below with cluster admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • NDM helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters on NDM to exclude paths before installing OpenEBS.
  • NDM runs as a privileged pod since it needs to access the device information. Please make the necessary changes to grant access to run in privileged mode. For example, when running in RHEL/CentOS, you may need to set the security context appropriately.

Using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.8.0.yaml

Using helm stable charts

helm repo update
helm install  --namespace openebs --name openebs stable/openebs

Sample Storage Pool Claims, Storage Class and PVC configurations to make use of new features can be found here: Sample YAMLs

For more details refer to the documentation at: https://docs.openebs.io/

Change Summary

New Capabilities

  • Support for creating instant Snapshots on cStor volumes that can be used for both data protection and data warm-up use cases. The snapshots can be taken using kubectl by providing a VolumeSnapshot YAML as shown below:

    apiVersion: volumesnapshot.external-storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:  
      name: <snapshot-name>
      namespace: default
    spec:
      persistentVolumeClaimName: <cstor-pvc-name>
    

    cStor Volume Snapshot creation is controlled by the cStor Target Pod by flushing all the pending IOs to the Replicas and requesting each of the Replicas to take a snapshot. A cStor Volume Snapshot can be taken as long as two Replicas are Healthy.

    Since snapshots can be managed via kubectl, you can set up your own K8s cron job that takes periodic snapshots (a sketch of such a CronJob is shown after this list).

  • Support for creating clone PVs from a previously taken snapshot. Clones can be used either for recovering data from a previously taken snapshot or for optimizing the application startup time. Application startup time can be reduced in use cases where an application pod requires some kind of seed data to be available. With clones support, a user can set up a seed volume and fill it with the data, then create a snapshot of the seed data. When the applications are launched, the PVCs can be set up to create cloned PVs from the seed data snapshot. cStor Clones are also optimized for minimizing the capacity overhead. cStor Clones are reference based and need capacity only for new or modified data.

    Clones can be created using the PVC YAML as shown below:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: <cstor-clone-pvc-name>
      namespace: default
      annotations:
        snapshot.alpha.kubernetes.io/snapshot: <snapshot-name>
    spec:
      storageClassName: openebs-snapshot-promoter
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
    

    Note that the requested storage specified (like 5Gi in the above example) should match the requested storage of <cstor-pvc-name>, the PVC on which the snapshot was taken.

  • Support for providing the runtime status of the cStor Volumes and Pools via kubectl describe commands. For example:

    • cStor Volume Status can be obtained by kubectl describe cstorvolume <cstor-pv-name> -n <openebs-namespace>. The status reported will contain information as shown below and more:
      Status:
        Phase:  Healthy
        Replica Statuses:
          Replica Id:           15041373535437538769
          Status:               Healthy
          Up Time:              14036
          Replica Id:           6608387984068912137
          Status:               Healthy
          Up Time:              14035
          Replica Id:           17623616871400753550
          Status:               Healthy
          Up Time:              14034
      
    • Similarly, each cStor Pool status can also be fetched by a kubectl describe csp <pool-name> -n <openebs-namespace>. The details shown are:
      status:
        capacity:
          free: 9.62G
          total: 9.94G
          used: 322M
        phase: Healthy
      
    • The above describe command for cStor Volume - can also show details like the number of IOs currently inflight to Replica and details of how the Volume status has changed via Kubernetes events.
    • The interval between updating the status can be configured via Storage Policy - ResyncInterval. The default sync interval is 30s.
  • The status of the Volume or Pool can be one of: Init, Healthy, Degraded, Offline, Error.
  • Support for new Storage Policies for cStor Volumes such as:

    • Target Affinity: (Applicable to both Jiva and cStor Volumes) The stateful workloads access the OpenEBS Storage by connecting to the Volume Target Pod. This policy can be used to co-locate the volume target pod on the same node as the workload, to avoid conditions like:

      • network disconnects between the workload node and target node
      • shutting down of the node on which the volume target pod is scheduled, for maintenance.
        In the above cases, if the restoration of network, pod or node takes more than 120 seconds, the workload loses connectivity to the storage.
        This feature makes use of the Kubernetes Pod Affinity feature that is dependent on the Pod labels. Users will need to add the following label to both the Application and the PVC.
      labels:
        openebs.io/target-affinity: <application-unique-label>
      

      Example of using this policy can be found here.

      Note that this Policy only applies to Deployments or StatefulSets with a single workload instance.

    • Target Namespace: (Applicable to only cStor volumes). By default the cStor target pods are scheduled in a dedicated openebs namespace. The target pod is also provided with the openebs service account so that it can access the Kubernetes Custom Resource called CStorVolume and Events.
      This policy allows the Cluster administrator to specify whether the Volume Target pods should be deployed in the namespace of the workloads themselves. This can help with setting resource limits on the target pods, based on the namespace in which they are deployed.
      To use this policy, the Cluster administrator could either use the existing openebs service account or create a new service account with limited access and provide it in the StorageClass as follows:

        annotations:
          cas.openebs.io/config: |
            - name: PVCServiceAccountName
              value: "user-service-account"
      

      The sample service account can be found here

  • Support for sending anonymous analytics to Google Analytics server. This feature can be disabled by setting the maya-apiserver environment flag - OPENEBS_IO_ENABLE_ANALYTICS to false. Very minimal information like the K8s version and the type of volumes being deployed is collected. No sensitive information like names or IP addresses is collected. The details collected can be viewed here.
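
As a sketch of the cron-job idea mentioned under the snapshot capability above (purely illustrative, not part of the release; the ServiceAccount snapshot-admin, the kubectl image, and the PVC name demo-cstor-pvc are assumptions), a nightly snapshot could be scheduled like this:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-snapshot
  namespace: default
spec:
  schedule: "0 1 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          # ServiceAccount that is allowed to create VolumeSnapshot objects (assumed to exist)
          serviceAccountName: snapshot-admin
          restartPolicy: OnFailure
          containers:
          - name: take-snapshot
            image: bitnami/kubectl
            command:
            - /bin/sh
            - -c
            - |
              # Create a dated VolumeSnapshot for the PVC demo-cstor-pvc
              cat <<EOF | kubectl apply -f -
              apiVersion: volumesnapshot.external-storage.k8s.io/v1
              kind: VolumeSnapshot
              metadata:
                name: demo-snap-$(date +%Y%m%d)
                namespace: default
              spec:
                persistentVolumeClaimName: demo-cstor-pvc
              EOF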

Enhancements

  • Enhance the metrics reported from cStor and jiva Volumes to include target and replica status.
  • Enhance the volume metrics exporter to provide metrics in json format.
  • Enhance the maya-apiserver API to include a stats api that will pass through the request to the respective volume exporter metrics API.
  • Enhance cStor storage engine to be resilient against replica failures and cover several corner cases associated with rebuild replica.
  • Enhance jiva storage engine to clear up the space occupied by temporary snapshots taken by the replicas during replica rebuild.
  • Enhance jiva storage engine to support sync and unmap IOs.
  • Enhance CAS Templates to allow invoking REST API calls to non-Kubernetes services.
  • Enhance CAS Templates to support an option to disable a Run Task.
  • Enhance CAS Templates to include .CAST object which will be available for Run Tasks. .CAST contains information like openebs and kubernetes versions.
  • Enhance the maya-apiserver installer code to remove the dependency on the config map and to determine and load the CAS Templates based on maya-apiserver version. When maya-apiserver is upgraded from 0.7 to 0.8 - a new set of default CAS Templates will be available for 0.8.
  • Enhance mayactl (CLI) to include bash or zsh completion support. Users need to run: source <(mayactl completion bash).
  • Enhance the volume provisioning to add Prometheus annotations for scrape and port to the volume target pods (an illustrative fragment is shown after this list).
  • Enhance the build scripts to push commit tagged images and also add support for GitLab based CI.
  • Enhance the CI scripts in each of the repos to cover new features.
  • 250+ PRs merged from the community fixing the documentation, code style/lint and add missing unit tests across various repositories.
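
For reference, the scrape annotations mentioned above take the usual Prometheus form on the target pods; the port value shown here is only illustrative and depends on the exporter configuration:

annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "9500"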

Major Bugs Fixed

  • Fixed an issue where a cStor pool could become inaccessible if two pool pods attempt to access the same disks. This can happen when a pool pod is terminated/evicted and a new pod is immediately scheduled on the same node.
  • Fixed an issue where a cStor pool could restart if one of the cStor volume target pods is restarted.
  • Fixed an issue with auto-creation of cStor Pools using SPC and type as mirrored. The type wa...

0.7.2

19 Nov 18:30
Pre-release

Getting Started

Prerequisite to install

  • Kubernetes 1.9.7+ is installed
  • Make sure that you run the installation steps below with cluster admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • NDM helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters on NDM to exclude paths before installing OpenEBS.

Using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.7.2.yaml

Using helm stable charts

helm repo update
helm install  --namespace openebs --name openebs stable/openebs

Sample Storage Pool Claims, Storage Class and PVC configurations to make use of new features can be found here: Sample YAMLs

For more details refer to the documentation at: https://docs.openebs.io/

Change Summary

Minor enhancements

  • Support for clearing space used by the Jiva Replica after the Jiva Volume is deleted. The space reclaim is done by scheduling a Kubernetes Job. Note: If your cluster setup requires lots of OpenEBS volumes to be created and deleted, you will need to set up a cron job to clear the completed jobs. This is required until the Kubernetes feature TTL Mechanism for Finished Jobs is supported.
  • Support for a storage policy that can disable the Jiva Volume Space reclaim. Please add the following to your StorageClasses if you would like to disable Jiva Volume Space reclaim (in other words, retain the volume data post PVC deletion).
      cas.openebs.io/config: |
        - name: RetainReplicaData
          enabled: true  
    
  • Support for a storage policy to allow scheduling the Jiva Target Pods on the same node as the Application Pod. This feature makes use of the Kubernetes Pod Affinity feature that is dependent on the Pod labels. You will need to add the following label to both Application and PVC.
    labels:
      openebs.io/target-affinity: <application-unique-label>
    

Bug Fixes

  • Fixed an issue where internal snapshots created for rebuilding were getting deleted in case of errors in opening or holding the snapshots.
  • Fixed an issue in sending cStor volume metrics to Prometheus like uptime, read/write block count and read/write time (ns).

Detailed release notes are maintained in Project Tracker Wiki.

Limitations

  • Jiva target to Replica message protocol has been enhanced to handle the write errors. This change in the data exchanges causes the older replicas to be incompatible with the newer target and vice versa. The upgrade involves shutting down all the replicas before launching them with the new version. Since the volume requires the target and at least 2 replicas to be online, chances of volumes getting into the read-only state during upgrade are higher. A manual intervention will be required to recover the volume.
  • For OpenEBS volumes configured with more than 1 replica, more than half of the replicas should be online for the Volume to allow Read and Write. In the upcoming releases, with the cStor data engine, Volumes can be allowed to Read/Write when there is at least one replica in the ready state.
  • This release contains preview support for cloning an OpenEBS Volume from a snapshot. This feature only supports a single replica for a cloned volume, which is intended to be used for temporarily spinning up a new application pod for recovering lost data from the previous snapshot.
  • While testing for different platforms, with a three-node/replica OpenEBS volume and shutting down one of the three nodes, there was an intermittent case where one of the 2 remaining replicas also had to be restarted.
  • The OpenEBS target (controller) pod depends on the Kubernetes node tolerations to reschedule the pod in the event of node failure. For this feature to work, the TaintNodesByCondition alpha feature must be enabled in Kubernetes. In a scenario where the OpenEBS target (controller) is not rescheduled and back to running within 120 seconds, the volume gets into a read-only state and a manual intervention is required to make the volume read-write again.
  • The current version of OpenEBS volumes is not optimized for performance sensitive applications.

For a more comprehensive list of open issues uncovered through e2e, please refer to open issues.

0.7.1

01 Nov 03:00
Pre-release

Getting Started

Prerequisite to install

  • Kubernetes 1.9.7+ is installed
  • Make sure that you run the installation steps below with cluster admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • NDM helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters on NDM to exclude paths before installing OpenEBS.

Using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.7.1.yaml

Using helm stable charts

helm install  --namespace openebs --name openebs stable/openebs

Using OpenEBS Helm Charts (will be deprecated in the coming releases)

helm repo add openebs-charts https://openebs.github.io/charts/
helm repo update
helm install openebs-charts/openebs

Sample Storage Pool Claims, Storage Class and PVC configurations to make use of new features can be found here: Sample YAMLs

For more details refer to the documentation at: https://docs.openebs.io/

Change Summary

Minor enhancements

  • Support for using OpenEBS PVs as Block Devices for Application Pods (a generic example is shown below)
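
A generic Kubernetes illustration of consuming a PV as a raw block device (the names and the StorageClass below are placeholders, not taken from the release notes):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: block-pvc
spec:
  storageClassName: openebs-cstor-sparse
  volumeMode: Block
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: block-consumer
spec:
  containers:
  - name: app
    image: busybox
    command: [ "sleep", "3600" ]
    volumeDevices:
    # The raw device is exposed inside the container at this path
    - name: data
      devicePath: /dev/xvda
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: block-pvc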

Bug Fixes

  • Fixed an issue with PVs not getting created when capacity had "i" suffix
  • Fixed an issue with cStor Target Pod stuck in terminating state due to shared hostPath
  • Fixed an issue with FSType from StorageClass not being configured on PV
  • Fixed an issue with NDM discovering capacity of disks via CDB16
  • Fixed an issue with PV name generation exceeding 64 characters. PVC UUID will be used as PV Name.
  • Fixed an issue with cStor Pool Pod terminating when there is an abrupt connection break
  • Fixed an issue with cStor Volume clean-up failure blocking new volumes from being created.

Detailed release notes are maintained in Project Tracker Wiki.

Limitations

  • Jiva target to Replica message protocol has been enhanced to handle the write errors. This change in the data exchanges causes the older replicas to be incompatible with the newer target and vice versa. The upgrade involves shutting down all the replicas before launching them with the new version. Since the volume requires the target and at least 2 replicas to be online, chances of volumes getting into the read-only state during upgrade are higher. A manual intervention will be required to recover the volume.
  • For OpenEBS volumes configured with more than 1 replica, more than half of the replicas should be online for the Volume to allow Read and Write. In the upcoming releases, with the cStor data engine, Volumes can be allowed to Read/Write when there is at least one replica in the ready state.
  • This release contains preview support for cloning an OpenEBS Volume from a snapshot. This feature only supports a single replica for a cloned volume, which is intended to be used for temporarily spinning up a new application pod for recovering lost data from the previous snapshot.
  • While testing for different platforms, with a three-node/replica OpenEBS volume and shutting down one of the three nodes, there was an intermittent case where one of the 2 remaining replicas also had to be restarted.
  • The OpenEBS target (controller) pod depends on the Kubernetes node tolerations to reschedule the pod in the event of node failure. For this feature to work, the TaintNodesByCondition alpha feature must be enabled in Kubernetes. In a scenario where the OpenEBS target (controller) is not rescheduled and back to running within 120 seconds, the volume gets into a read-only state and a manual intervention is required to make the volume read-write again.
  • The current version of OpenEBS volumes is not optimized for performance sensitive applications.

For a more comprehensive list of open issues uncovered through e2e, please refer to open issues.

v0.7

09 Sep 01:33
Pre-release

Getting Started

Prerequisite to install

  • Kubernetes 1.9.7+ is installed
  • Make sure that you have completed the installation steps below with cluster admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
  • Make sure iSCSI Initiator is installed on the Kubernetes nodes.
  • NDM helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from getting discovered, update the filters on NDM to exclude paths before installing OpenEBS.

Using kubectl

kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.7.0.yaml

Using OpenEBS Helm Charts (will be deprecated in the coming releases)

helm repo add openebs-charts https://openebs.github.io/charts/
helm repo update
helm install openebs-charts/openebs

For more details refer to the documentation at: https://docs.openebs.io/

Note: Kubernetes stable/openebs helm chart and other charts still point to 0.6 and efforts are underway to update them to 0.7.

Quick Summary on changes

  • Node Disk Manager that helps with discovering block devices attached to nodes
  • Alpha support for cStor Storage Engines
  • Updated CRDs for supporting cStor as well as pluggable storage control plane
  • Jiva Storage Pool called default and StorageClass called openebs-jiva-default
  • cStor Storage Pool Claim called cstor-sparse-pool and StorageClass called openebs-cstor-sparse
  • There has been a change in the way volume storage policies can be specified, with the addition of new policies like the following (an illustrative StorageClass is shown after this list):
    • Number of Data copies to be made
    • Specify the nodes on which the Data copies should be persisted
    • Specify the CPU or Memory Limits per PV
    • Choice of Storage Engine: cStor or Jiva
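
An illustrative StorageClass using this policy style (a sketch only; the policy names ReplicaCount and StoragePool follow the 0.7 sample YAMLs and should be checked against them):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-jiva-2-replica
  annotations:
    openebs.io/cas-type: jiva
    cas.openebs.io/config: |
      - name: ReplicaCount
        value: "2"
      - name: StoragePool
        value: default
provisioner: openebs.io/provisioner-iscsi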

Sample Storage Pool Claims, Storage Class and PVC configurations to make use of new features can be found here: Sample YAMLs

Detailed release notes are maintained in Project Tracker Wiki.

Limitations

  • cStor Target or Pool pods can at times be stuck in a Terminating state. They will need to be manually cleaned up using kubectl delete with 0 sec grace period.
  • Jiva target to Replica message protocol has been enhanced to handle the write errors. This change in the data exchanges causes the older replicas to be incompatible with the newer target and vice versa. The upgrade involves shutting down all the replicas before launching them with the new version. Since the volume requires the target and at least 2 replicas to be online, chances of volumes getting into the read-only state during upgrade are higher. A manual intervention will be required to recover the volume.
  • For OpenEBS volumes configured with more than 1 replica, more than half of the replicas should be online for the Volume to allow Read and Write. In the upcoming releases, with the cStor data engine, Volumes can be allowed to Read/Write when there is at least one replica in the ready state.
  • This release contains preview support for cloning an OpenEBS Volume from a snapshot. This feature only supports a single replica for a cloned volume, which is intended to be used for temporarily spinning up a new application pod for recovering lost data from the previous snapshot.
  • While testing for different platforms, with a three-node/replica OpenEBS volume and shutting down one of the three nodes, there was an intermittent case where one of the 2 remaining replicas also had to be restarted.
  • The OpenEBS target (controller) pod depends on the Kubernetes node tolerations to reschedule the pod in the event of node failure. For this feature to work, the TaintNodesByCondition alpha feature must be enabled in Kubernetes. In a scenario where the OpenEBS target (controller) is not rescheduled and back to running within 120 seconds, the volume gets into a read-only state and a manual intervention is required to make the volume read-write again.
  • The current version of OpenEBS volumes is not optimized for performance sensitive applications.

For a more comprehensive list of open issues uncovered through e2e, please refer to open issues.

v0.7-RC2

29 Aug 15:40
Pre-release

Please Note: This is a release candidate build. If you are looking at deploying from a stable release, please follow the instructions at Quick Start Guide.

Getting Started with OpenEBS v0.7-RC2

Prerequisite to install

  • Kubernetes 1.9.7+ is installed
  • Make sure you run the following kubectl command with cluster admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.

Install and Setup

kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.7.0-RC2.yaml

The above command will install OpenEBS Control Plane components and all the required Kubernetes CRDs. With 0.7, the following new services will be installed:

  • Node Disk Manager that helps with discovering block devices attached to nodes
  • Configuration Files required for supporting both Jiva and cStor Storage Engines
  • A default Jiva Storage Pool and a StorageClass called openebs-standard
  • A default cStor Storage Pool and a StorageClass called openebs-cstor-sparse

You are all set!

You can now install your Stateful applications that make use of either of the above StorageClasses or you can create a completely new StorageClass that can be configured with Storage Policies like:

  • Number of Data copies to be made
  • Specify the nodes on which the Data copies should be persisted
  • Specify the CPU or Memory Limits per PV
  • Choice of Storage Engine: cStor or Jiva

Some of the sample Storage Class and PVC configurations can be found here: Sample YAMLs

Additional details and release notes are available on Project Tracker Wiki.

v0.7-RC1

20 Aug 10:20
Pre-release

Getting Started

Prerequisite to install

Make sure that the user is assigned the cluster-admin ClusterRole to run the install steps below.

Using kubectl

Install the 0.7.0-RC1 OpenEBS with CAS Templates.

kubectl apply -f https://raw.githubusercontent.com/openebs/store/master/docs/openebs-operator-0.7.0-RC1.yaml
kubectl apply -f https://raw.githubusercontent.com/openebs/store/master/docs/openebs-pre-release-features-0.7.0-RC1.yaml

Download the following file, update the disks, and apply it to create cStor Pools.

wget https://raw.githubusercontent.com/openebs/store/master/docs/openebs-config-0.7.0-RC1.yaml
kubectl apply -f openebs-config-0.7.0-RC1.yaml