Db2u is here as a technical preview! I’ll cover the basics of setting it up for a local sandbox, and leave the details and the whys to future articles. IBM has stated that db2u will be generally available later this year on Amazon EKS and Azure AKS, but meanwhile, I’ve got the technical preview up and running on my MacBook Pro!
The tutorial for the technical preview was published on Medium in January by Baheer Kamal, an IBMer. I’d recommend reading it carefully for the full details; he understands all of this at a much deeper level than I do, and he’s sometimes on the Db2 Discord if you have feedback. There are a few typos in the Medium article – notably a missing namespace in a couple of places and a missing dash – that might trip you up if you’re not familiar with Kubernetes and its associated technologies.
Setting up db2u locally
I’m sharing the steps that I used to run db2u on my computer. Note that this was done on a Mac, and an old enough Mac that it does NOT have an M1 chip. I am 99% sure that this will fail on an M1. Others and I are pushing IBM to get db2u working on the M1 architecture, but it has not made their list of priorities yet. Please be vocal with IBM if this is something you also want.
Kubernetes
First, note that the Kubernetes that comes with Docker Desktop is not enough. You’ll need MiniKube to follow the directions provided by the tutorial. I also know someone who got it working on k3d.
MiniKube
MiniKube is easy to work with. If you go to their website, it will give you the exact command you need to download and install it based on the computer you’re working from. Here are the commands I used, along with the output I saw:
$ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 66.6M 100 66.6M 0 0 13.8M 0 0:00:04 0:00:04 --:--:-- 14.8M
$ sudo install minikube-darwin-amd64 /usr/local/bin/minikube
Password:
$ minikube start
😄 minikube v1.25.1 on Darwin 10.15.6
✨ Automatically selected the docker driver. Other choices: hyperkit, ssh
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
💾 Downloading Kubernetes v1.23.1 preload ...
> preloaded-images-k8s-v16-v1...: 504.42 MiB / 504.42 MiB 100.00% 8.72 MiB
> gcr.io/k8s-minikube/kicbase: 378.98 MiB / 378.98 MiB 100.00% 6.49 MiB p/
🔥 Creating docker container (CPUs=2, Memory=2200MB) ...
🐳 Preparing Kubernetes v1.23.1 on Docker 20.10.12 ...
▪ kubelet.housekeeping-interval=5m
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
❗ /usr/local/bin/kubectl is version 1.21.2, which may have incompatibilites with Kubernetes 1.23.1.
▪ Want kubectl v1.23.1? Try 'minikube kubectl -- get pods -A'
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
After that, I added this alias to my .bash_profile:
alias kubectl="minikube kubectl --"
This works for me because I don’t use kubectl locally, but use it through the Rancher CLI.
OLM
Next, you need to download and install the Operator Lifecycle Manager (OLM). My understanding of all the moving pieces here is still developing, but OLM provides the framework that the Db2 operator needs to be installed and managed. That looks something like this:
$ curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.20.0/install.sh | bash -s v0.20.0
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/olmconfigs.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/olmconfigs.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/operatorconditions.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com condition met
namespace/olm created
namespace/operators created
serviceaccount/olm-operator-serviceaccount created
clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
olmconfig.operators.coreos.com/cluster created
deployment.apps/olm-operator created
deployment.apps/catalog-operator created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
operatorgroup.operators.coreos.com/global-operators created
operatorgroup.operators.coreos.com/olm-operators created
clusterserviceversion.operators.coreos.com/packageserver created
catalogsource.operators.coreos.com/operatorhubio-catalog created
Waiting for deployment "olm-operator" rollout to finish: 0 of 1 updated replicas are available...
deployment "olm-operator" successfully rolled out
deployment "catalog-operator" successfully rolled out
Package server phase: Installing
Package server phase: Succeeded
deployment "packageserver" successfully rolled out
The output there tells me it is also creating a namespace and other needed objects. I need to invest some time here to understand all of the components.
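One way to sanity-check that output mechanically: every CRD the script reports as "created" should later report "condition met" (meaning it became established). This is a small sketch of my own, not part of the install script, run against lines captured from the output above:

```python
# Check OLM install output: every CRD "created" should also
# reach "condition met" (i.e., be established by the API server).
output = """\
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com condition met
"""

created = {line.split()[0] for line in output.splitlines()
           if line.startswith("customresourcedefinition") and line.endswith(" created")}
met = {line.split()[0] for line in output.splitlines()
       if line.endswith(" condition met")}

print(created - met)  # CRDs that never became established; an empty set is good
```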
Next, we use YAML to create the namespace, catalog source, operator group, and subscription that pull in the db2u operator. I’m pasting multiple lines of YAML into the shell here via a heredoc. As I understand it, there aren’t really changes or tweaks to make at this step.
$ kubectl create -f - << EOF
> ---
> apiVersion: v1
> kind: Namespace
> metadata:
>   name: db2u
> ---
> apiVersion: operators.coreos.com/v1alpha1
> kind: CatalogSource
> metadata:
>   name: db2u-catalog
>   namespace: olm
> spec:
>   sourceType: grpc
>   image: icr.io/db2u/db2-catalog@sha256:e6ac7746ae2d6d3bb4fbf75144d94e400db887c1702b19e08b9533752d896178
>   displayName: Db2u Operators
>   publisher: IBM Db2
> ---
> apiVersion: operators.coreos.com/v1alpha2
> kind: OperatorGroup
> metadata:
>   name: db2u-oprator-og
>   namespace: db2u
> spec:
>   targetNamespaces:
>     - db2u
> ---
> apiVersion: operators.coreos.com/v1alpha1
> kind: Subscription
> metadata:
>   name: db2u-operator-v2-0-0-sub
>   namespace: db2u
> spec:
>   name: db2u-operator
>   source: db2u-catalog
>   sourceNamespace: olm
>   installPlanApproval: Automatic
> EOF
namespace/db2u created
catalogsource.operators.coreos.com/db2u-catalog created
operatorgroup.operators.coreos.com/db2u-oprator-og created
subscription.operators.coreos.com/db2u-operator-v2-0-0-sub created
I can see here that it is creating more resources for me. These will take a bit to get fully created and ready. Wait until the command below shows both a STATUS of Running and a READY of 1/1:
$ while true; do sleep 30; kubectl get -n db2u pod; done
No resources found in db2u namespace.
NAME READY STATUS RESTARTS AGE
db2u-operator-manager-6f5b9b496-56hd7 0/1 ContainerCreating 0 10s
NAME READY STATUS RESTARTS AGE
db2u-operator-manager-6f5b9b496-56hd7 0/1 Running 0 40s
NAME READY STATUS RESTARTS AGE
db2u-operator-manager-6f5b9b496-56hd7 1/1 Running 0 71s
You can see that this took a bit over a minute for me.
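If you’d rather script the wait than eyeball it, a small parser over the `kubectl get pod` output works. `pod_is_ready` here is my own helper, not something from the tutorial:

```python
def pod_is_ready(line: str) -> bool:
    """True if a `kubectl get pod` output line shows READY n/n and STATUS Running."""
    fields = line.split()
    if len(fields) < 3 or "/" not in fields[1]:
        return False  # header line or unexpected format
    ready, status = fields[1], fields[2]
    have, want = ready.split("/")
    return status == "Running" and have == want

# Lines taken from the transcript above:
print(pod_is_ready("db2u-operator-manager-6f5b9b496-56hd7 0/1 Running 0 40s"))  # False
print(pod_is_ready("db2u-operator-manager-6f5b9b496-56hd7 1/1 Running 0 71s"))  # True
```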
Running the Db2 Image (Creating Pod, Container, Instance and Database)
Now we’re getting to the parts that are usually the DBA’s responsibility when building traditional systems. The next thing to do is to write YAML for a custom resource (an instance of the Db2uCluster CRD) that defines what we want created and how we want it created. This is also where I expect to make changes to configure my system differently. I haven’t done those experiments yet, but I think this is where I’ll be doing them. I’ll first provide the command as I used it, and then provide the results of running it separately for clarity.
kubectl create -f - << EOF
apiVersion: db2u.databases.ibm.com/v1
kind: Db2uCluster
metadata:
  name: demo
  namespace: db2u
spec:
  license:
    accept: true
  version: "11.5.7.0"
  size: 1
  environment:
    dbType: db2oltp
    database:
      name: bludb
  storage:
    - name: meta
      type: "create"
      spec:
        storageClassName: "standard"
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
    - name: data
      type: "create"
      spec:
        storageClassName: "standard"
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi
EOF
If you've worked with Db2, I'm sure you can see things here that might be interesting to change. I know there are different dbTypes for OLTP and warehouse (BLU - maybe DPF?) databases. I could change the database name here, and if I weren't just starting to experiment, I would - BLUDB doesn't make sense for an OLTP database. Also note that there are two different storage resources here: one that allows access by multiple containers (ReadWriteMany) and one that does not (ReadWriteOnce). I'd speculate that the shared area is used more by a warehousing implementation, or by HADR if you set that up. I must resist the urge to dig into the details of HADR and topics like anti-affinity that we'd need to handle to run it properly on Kubernetes.
Here's the confirmation I got after pasting that in:
db2ucluster.db2u.databases.ibm.com/demo created
This part takes even longer to spin up. I'd assume that's because it is installing Db2 and creating an instance and a database. To watch the progress, use a command like the one below. You're looking for a STATUS of Completed on the pod with "restore-morph" in its name.
$ while true; do sleep 30; kubectl get po -n db2u; done
NAME READY STATUS RESTARTS AGE
db2u-operator-manager-6f5b9b496-xwwr8 1/1 Running 0 6m57s
NAME READY STATUS RESTARTS AGE
c-demo-instdb-fqsl8 0/1 ContainerCreating 0 7s
c-demo-ldap-6b9dcd85b5-6qs6x 0/1 ContainerCreating 0 7s
db2u-operator-manager-6f5b9b496-xwwr8 1/1 Running 0 7m27s
NAME READY STATUS RESTARTS AGE
c-demo-instdb-fqsl8 0/1 ContainerCreating 0 37s
c-demo-ldap-6b9dcd85b5-6qs6x 0/1 ContainerCreating 0 37s
db2u-operator-manager-6f5b9b496-xwwr8 1/1 Running 0 7m57s
NAME READY STATUS RESTARTS AGE
c-demo-etcd-0 0/1 ContainerCreating 0 8s
c-demo-instdb-fqsl8 0/1 Completed 0 68s
c-demo-ldap-6b9dcd85b5-6qs6x 0/1 Running 0 68s
db2u-operator-manager-6f5b9b496-xwwr8 1/1 Running 0 8m28s
NAME READY STATUS RESTARTS AGE
c-demo-db2u-0 0/1 Init:0/2 0 8s
c-demo-etcd-0 0/1 Running 0 38s
c-demo-instdb-fqsl8 0/1 Completed 0 98s
c-demo-ldap-6b9dcd85b5-6qs6x 1/1 Running 0 98s
db2u-operator-manager-6f5b9b496-xwwr8 1/1 Running 0 8m58s
NAME READY STATUS RESTARTS AGE
c-demo-db2u-0 0/1 PodInitializing 0 38s
c-demo-etcd-0 1/1 Running 0 68s
c-demo-instdb-fqsl8 0/1 Completed 0 2m8s
c-demo-ldap-6b9dcd85b5-6qs6x 1/1 Running 0 2m8s
db2u-operator-manager-6f5b9b496-xwwr8 1/1 Running 0 9m28s
NAME READY STATUS RESTARTS AGE
c-demo-db2u-0 0/1 PodInitializing 0 68s
c-demo-etcd-0 1/1 Running 0 98s
c-demo-instdb-fqsl8 0/1 Completed 0 2m38s
c-demo-ldap-6b9dcd85b5-6qs6x 1/1 Running 0 2m38s
db2u-operator-manager-6f5b9b496-xwwr8 1/1 Running 0 9m58s
NAME READY STATUS RESTARTS AGE
c-demo-db2u-0 0/1 PodInitializing 0 99s
c-demo-etcd-0 1/1 Running 0 2m9s
c-demo-instdb-fqsl8 0/1 Completed 0 3m9s
c-demo-ldap-6b9dcd85b5-6qs6x 1/1 Running 0 3m9s
db2u-operator-manager-6f5b9b496-xwwr8 1/1 Running 0 10m
NAME READY STATUS RESTARTS AGE
c-demo-db2u-0 0/1 PodInitializing 0 2m9s
c-demo-etcd-0 1/1 Running 0 2m39s
c-demo-instdb-fqsl8 0/1 Completed 0 3m39s
c-demo-ldap-6b9dcd85b5-6qs6x 1/1 Running 0 3m39s
db2u-operator-manager-6f5b9b496-xwwr8 1/1 Running 0 10m
NAME READY STATUS RESTARTS AGE
c-demo-db2u-0 0/1 PodInitializing 0 2m39s
c-demo-etcd-0 1/1 Running 0 3m9s
c-demo-instdb-fqsl8 0/1 Completed 0 4m9s
c-demo-ldap-6b9dcd85b5-6qs6x 1/1 Running 0 4m9s
db2u-operator-manager-6f5b9b496-xwwr8 1/1 Running 0 11m
NAME READY STATUS RESTARTS AGE
c-demo-db2u-0 0/1 Running 0 3m10s
c-demo-etcd-0 1/1 Running 0 3m40s
c-demo-instdb-fqsl8 0/1 Completed 0 4m40s
c-demo-ldap-6b9dcd85b5-6qs6x 1/1 Running 0 4m40s
db2u-operator-manager-6f5b9b496-xwwr8 1/1 Running 0 12m
NAME READY STATUS RESTARTS AGE
c-demo-db2u-0 1/1 Running 0 3m40s
c-demo-etcd-0 1/1 Running 0 4m10s
c-demo-instdb-fqsl8 0/1 Completed 0 5m10s
c-demo-ldap-6b9dcd85b5-6qs6x 1/1 Running 0 5m10s
c-demo-restore-morph-r788c 1/1 Running 0 17s
db2u-operator-manager-6f5b9b496-xwwr8 1/1 Running 0 12m
NAME READY STATUS RESTARTS AGE
c-demo-db2u-0 1/1 Running 0 4m10s
c-demo-etcd-0 1/1 Running 0 4m40s
c-demo-instdb-fqsl8 0/1 Completed 0 5m40s
c-demo-ldap-6b9dcd85b5-6qs6x 1/1 Running 0 5m40s
c-demo-restore-morph-r788c 1/1 Running 0 47s
db2u-operator-manager-6f5b9b496-xwwr8 1/1 Running 0 13m
NAME READY STATUS RESTARTS AGE
c-demo-db2u-0 1/1 Running 0 4m40s
c-demo-etcd-0 1/1 Running 0 5m10s
c-demo-instdb-fqsl8 0/1 Completed 0 6m10s
c-demo-ldap-6b9dcd85b5-6qs6x 1/1 Running 0 6m10s
c-demo-restore-morph-r788c 1/1 Running 0 77s
db2u-operator-manager-6f5b9b496-xwwr8 1/1 Running 0 13m
NAME READY STATUS RESTARTS AGE
c-demo-db2u-0 1/1 Running 0 5m10s
c-demo-etcd-0 1/1 Running 0 5m40s
c-demo-instdb-fqsl8 0/1 Completed 0 6m40s
c-demo-ldap-6b9dcd85b5-6qs6x 1/1 Running 0 6m40s
c-demo-restore-morph-r788c 0/1 Completed 0 107s
db2u-operator-manager-6f5b9b496-xwwr8 1/1 Running 0 14m
In my case, this took about 7-8 minutes to get through everything.
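The same scripted-wait idea applies here: instead of watching the loop, look for the restore-morph pod reaching Completed. `restore_morph_done` is a hypothetical helper of mine, fed with lines from the transcript above:

```python
def restore_morph_done(kubectl_output: str) -> bool:
    """True once a pod with 'restore-morph' in its name shows STATUS Completed."""
    for line in kubectl_output.splitlines():
        fields = line.split()
        if len(fields) >= 3 and "restore-morph" in fields[0]:
            return fields[2] == "Completed"
    return False

still_running = "c-demo-restore-morph-r788c 1/1 Running 0 77s"
finished = "c-demo-restore-morph-r788c 0/1 Completed 0 107s"
print(restore_morph_done(still_running))  # False
print(restore_morph_done(finished))       # True
```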
You now have a usable database up and running on db2u!
Using db2u
There are some details you'll need to get in order to actually connect to the database.
Port
First, depending on how you're using Db2, you may need to know the nodePort assigned. Kubernetes assigns this for you, and I don't know if it's possible to change or control it at all. The command to find it is:
$ kubectl get svc c-demo-db2u-engn-svc -n db2u -o jsonpath='{.spec.ports[?(@.name=="legacy-server")].nodePort}{"\n"}'
30161
This was one of the places in the Medium article where the namespace was missing from the command. If you don't add the namespace, as I have above, you will get an error that looks like this:
$ kubectl get svc c-demo-db2u-engn-svc -o jsonpath='{.spec.ports[?(@.name=="legacy-server")].nodePort}{"\n"}'
Error from server (NotFound): services "c-demo-db2u-engn-svc" not found
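If the jsonpath expression in that command feels opaque, the equivalent lookup in Python makes clear what it does: fetch the service as JSON and pick the nodePort of the port named legacy-server. The sample JSON below is a trimmed, assumed shape of what `kubectl get svc ... -o json` returns (the second port entry is made up for illustration):

```python
import json

# Trimmed sample of a Service object; only the fields the jsonpath touches.
svc_json = """
{
  "spec": {
    "ports": [
      {"name": "legacy-server", "port": 50000, "nodePort": 30161},
      {"name": "ssl-server", "port": 50001, "nodePort": 30720}
    ]
  }
}
"""

svc = json.loads(svc_json)
# Equivalent of: {.spec.ports[?(@.name=="legacy-server")].nodePort}
node_port = next(p["nodePort"] for p in svc["spec"]["ports"]
                 if p["name"] == "legacy-server")
print(node_port)  # 30161
```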
Setting up Network Connectivity
Note that you'll have to leave the session you start this command from open - it runs in the foreground, not the background. This may be one of the disadvantages of using MiniKube; I know someone who got this working with k3d specifically so he didn't have to leave something running in the foreground to be able to connect. The warning message indicates this may have something to do with running the Docker driver on a Mac. Also note that the Medium article had a minor syntax error in this command, which I've corrected below.
$ minikube service --url c-demo-db2u-engn-svc -n db2u
🏃 Starting tunnel for service c-demo-db2u-engn-svc.
|-----------|----------------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|----------------------|-------------|------------------------|
| db2u | c-demo-db2u-engn-svc | | http://127.0.0.1:59145 |
| | | | http://127.0.0.1:59146 |
|-----------|----------------------|-------------|------------------------|
http://127.0.0.1:59145
http://127.0.0.1:59146
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
Password
Finally, a password is created for db2inst1 by the above process. You can set a password in the custom resource, and in a real implementation, I'd want that to be passed in as a secret from somewhere. In this case, we can extract the generated password. This is another command that was missing the namespace in the Medium article; I've added it here.
$ echo $(kubectl get secret c-demo-instancepassword -n db2u -o jsonpath={.data.password} | base64 -d)
YSFOgoNlb9Gauij
I do find it interesting that the password can be extracted in this manner.
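It's worth understanding why this works: Kubernetes secrets are base64-encoded, not encrypted, so anyone with rights to read the secret can recover the value. A quick sketch of what the `base64 -d` step in that pipeline does (the password here is just the example value from above, not anything sensitive):

```python
import base64

# Kubernetes stores secret values base64-encoded, not encrypted,
# so decoding is all it takes to recover the plaintext.
plaintext = "YSFOgoNlb9Gauij"
encoded = base64.b64encode(plaintext.encode()).decode()  # what lives in the Secret
decoded = base64.b64decode(encoded).decode()             # what `base64 -d` gives you

print(decoded)  # YSFOgoNlb9Gauij
```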
Connecting
Now you have all the information you need to connect to the database. Here's a simple Python script I used to test:

import ibm_db

# Connection string uses the tunnel port from the minikube service command
conn_str = 'database=bludb;hostname=localhost;port=59145;protocol=tcpip;uid=db2inst1;pwd=YSFOgoNlb9Gauij'
ibm_db_conn = ibm_db.connect(conn_str, '', '')

# Drop the table if it exists
drop_table = "drop table if exists mytable"
ibm_db.exec_immediate(ibm_db_conn, drop_table)

# Create the table
create = "create table mytable(id int, name varchar(50))"
ibm_db.exec_immediate(ibm_db_conn, create)

# Insert data into mytable using a prepared statement
insert = "insert into mytable values(?,?)"
params = ((1, 'Sanders'), (2, 'Pernal'), (3, 'OBrien'))
stmt_insert = ibm_db.prepare(ibm_db_conn, insert)
ibm_db.execute_many(stmt_insert, params)

# Fetch the data back
select = "select id, name from mytable"
stmt_select = ibm_db.exec_immediate(ibm_db_conn, select)
for x in range(3):
    cols = ibm_db.fetch_tuple(stmt_select)
    print("%s, %s" % (cols[0], cols[1]))

ibm_db.close(ibm_db_conn)
One thing to note here: the port number I'm using is not the NodePort, but the value supplied by the MiniKube service command. The NodePort may be more important to know in other scenarios.
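Since the port is the piece most likely to change between environments, one small improvement over hard-coding the string is assembling it from parts. The `db2_conn_str` helper below is mine, not part of ibm_db:

```python
def db2_conn_str(database: str, hostname: str, port: int, uid: str, pwd: str) -> str:
    """Assemble an ibm_db-style connection string from its parts."""
    return (f"database={database};hostname={hostname};port={port};"
            f"protocol=tcpip;uid={uid};pwd={pwd}")

# Same values as the script above; swap port for the NodePort in other setups.
conn_str = db2_conn_str("bludb", "localhost", 59145, "db2inst1", "YSFOgoNlb9Gauij")
print(conn_str)
```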
Documentation
Since db2u outside of OpenShift is still in technical preview, the complete documentation does not yet exist. I'm assured it will when they go GA with it later this year. For now, check out the db2u documentation for OpenShift to find some details you might need.
Summary
I'm excited to keep working with db2u. This is just the beginning, but I wanted to share my progress so far. Let me know your successes and failures!
Hi Ember, is there a way to have DB2 on Docker without MiniKube?
Yes. On dockerhub, you can use ibmcom/db2. It has some restrictions, though, and will never be supported for production environments.
Hi Ember,
This example does not work with the most recent minikube setup. Using the same Kubernetes version as you did, all is fine. (minikube start --kubernetes-version=v1.23.1)
Error was: Failed to list *v1beta1.CronJob: the server could not find the requested resource
v1beta1.CronJob is deprecated in recent versions of Kubernetes.
Regards Erik
Thanks, I’d recommend sharing that feedback with IBM either on the Medium Article or the Db2 Discord.