Wednesday, 18 July 2018

Introduction to Docker CLI

We will start with some common commands. Then, we'll take a peek at the commands used for Docker images, and finally we'll dive into the commands used for containers.
The first command we will look at is one of the most useful commands in Docker and in any command-line utility you may use. This is the help command. This is run simply by executing the command, as follows:
$ docker --help
The preceding command will give you a full list of all the Docker commands at your disposal and a brief description of what each command does. For further help with a particular command, you can run the following command:
$ docker <command> --help
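For instance, to see the options and arguments accepted by the pull command:
$ docker pull --help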
You will then receive additional information about using the command, such as options, arguments, and descriptions of the arguments. You can also use the docker version command to find out what version of Docker you are running:
$ docker version
Client:
 Version:      17.03.0-ce
 API version:  1.26
 Go version:   go1.7.5
 Git commit:   3a232c8
 Built:        Tue Feb 28 08:10:07 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.0-ce
 API version:  1.26 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   3a232c8
 Built:        Tue Feb 28 08:10:07 2017
 OS/Arch:      linux/amd64
 Experimental: false

Docker image management

Let's learn how to view which images you currently have that you can run, and let's also search for images on the Docker Hub. Let's first take a look at the docker images command:
$ docker images
REPOSITORY                              TAG                 IMAGE ID            CREATED             SIZE
centos                                  7                   49f7960eb7e4        6 weeks ago         200 MB
mariadb                                 10.1                e98a88b23fa0        7 weeks ago         400 MB
There are a few important pieces to understand from the output that you see. Let's go over the columns and what is contained in each of them. The first column is the repository column, which contains the name of the repository as it exists on the Docker Hub. If the repository came from another user's account, it would be listed as <user>/<repository>. The tag column shows what tag the image has. The image ID is derived from a unique 64-digit hexadecimal string (shown truncated to 12 characters in the output). The last two columns are pretty straightforward: the first is the creation date of the image, followed by the virtual size of the image. The size is very important because you want to keep or use images that are small in size if you plan to move them around a lot.
So let's take a look at how we can search for images on the Docker Hub using the Docker commands. The command that we will be looking at is docker search. With the docker search command, you can search based on the different criteria you are looking for. For example, we can search for all images with the term mariadb in their name and see what is available.
The command would go something like the following:
$ docker search mariadb
NAME                                                      DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
mariadb                                                   MariaDB is a community-developed fork of M...   2075      [OK]       
If we find an image that we want to use, we can simply pull it using its repository name with the docker pull command, as follows:
$ docker pull <image>
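For example, to pull the official mariadb image we found with the search above:
$ docker pull mariadb:10.1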
The image will be downloaded and will then show up in the list when we run the docker images command that we ran earlier.
With the docker rmi command, you can remove unwanted images from your machine:
$ docker rmi <image>
If you use the -f flag and specify the image’s short or long ID, then this command untags and removes all images that match the specified ID.
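For example, to force-remove the mariadb image using the short image ID from the earlier docker images output:
$ docker rmi -f e98a88b23fa0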

Starting containers

Let's first go over the basics of the docker run command and how to run containers. The most basic way to run a container is as follows:
$ docker run -i -t <image>:<tag> /bin/bash
For example, to run the mariadb image:
$ docker run -i -t mariadb:10.1 /bin/bash
The first option, -i, gives us an interactive shell into the running container. The second option, -t, allocates a pseudo-TTY; for interactive processes such as a shell, it must be used together with the -i switch.
Once you are comfortable with your container, you can test how it operates in daemon mode:
$ docker run -d <image>:<tag>
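For example, a minimal sketch with the mariadb image (the official image needs a root password supplied through the MYSQL_ROOT_PASSWORD environment variable; the value here is just a placeholder):
$ docker run -d -e MYSQL_ROOT_PASSWORD=my-secret-pw mariadb:10.1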
If the container is set up correctly and has an entrypoint defined, you should be able to see the running container by issuing the docker ps command:
$ docker ps
You can also expose ports on your containers using the -p switch, just like this:
$ docker run -d -p <host_port>:<container_port> <image>:<tag>
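For example, to expose MariaDB's default port 3306 on the host (the password value is again only a placeholder):
$ docker run -d -e MYSQL_ROOT_PASSWORD=my-secret-pw -p 3306:3306 mariadb:10.1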
Now, there will come a time when containers don't want to behave, and for this, you can see what issues you have using the docker logs command. This command is very straightforward: you specify the container for which you want to see the logs, and Docker prints what the container has written to stdout and stderr. For this command, you use the container ID or the name of the container from the docker ps output:
$ docker logs <id>
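If you want to keep following the log output as it is produced, the logs command also accepts the -f (follow) flag:
$ docker logs -f <id>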

Stopping containers

There are a few commands that we can use to do this: docker kill and docker stop. Let's cover them briefly, as they are fairly straightforward, and look at the difference between them. The docker kill command kills the container immediately, sending it a SIGKILL signal by default.
$ docker kill <container>
For a graceful shutdown of the container, you use the docker stop command, which sends a SIGTERM and, after a grace period (10 seconds by default), a SIGKILL.
$ docker stop <container>
When you are testing, you will usually use docker kill, and when you are in your production environments, you will want to use docker stop to ensure that you don't corrupt any data.
With the docker rename command, we can change the name that has been randomly generated for the container. When we used the docker run command, a random name was assigned to our container.
$ docker rename <current_container_name> <new_container_name>
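For example (both container names here are hypothetical; the first is the kind of random name Docker generates):
$ docker rename ecstatic_hopper mariadb_test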
The docker stats command displays a live stream of the containers' resource usage statistics:
$ docker stats
CONTAINER           CPU %               MEM USAGE / LIMIT       MEM %               NET I/O             BLOCK I/O           PIDS
30feb5ef3800        0.04%               628.2 MiB / 31.26 GiB   1.96%               9.41 MB / 31 MB     164 MB / 73.7 kB    0
2fb1bdd70ec2        0.04%               384.8 MiB / 31.26 GiB   1.20%               75.3 MB / 45.3 MB   38.5 MB / 783 MB    0
The docker top command gives us a list of all running processes inside the container:
$ docker top <container>
Lastly, let's cover how we can remove containers. In the same way that we looked at removing images earlier with the docker rmi command, we can use the docker rm command to remove unwanted containers. This is useful if you want to reuse a name you assigned to a container:
$ docker rm <container>
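For example, to remove the container we renamed earlier (assuming it has already been stopped):
$ docker rm mariadb_test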

Tuesday, 17 July 2018

Overview of Docker architecture

At the very start of the IT revolution, most applications were deployed directly on physical hardware, on top of the host OS. Because applications shared a single user space, the runtime was shared between them. Deployment was stable, hardware-centric, and had a long maintenance cycle. It was mostly managed by an IT department and gave developers far less flexibility. In such cases, hardware resources were regularly underutilized.
The following diagram shows a Traditional application deployment:

With the introduction of virtual machines (VMs), we emulated the hardware and deployed a guest OS on each VM. With virtualization, applications are isolated at the VM level and defined by the life cycle of VMs. This gives a better return on investment and higher flexibility, at the cost of increased complexity and redundancy.


The following diagram shows an application deployment with VMs:
After virtualization, we are now moving towards a more application-centric IT. We have removed the hypervisor layer to reduce hardware emulation and complexity. The applications are packaged with their runtime environment and are deployed using containers. Containers are also considered less secure than VMs, because with containers everything runs on the host OS: if a container gets compromised, it might be possible to gain full access to the host OS. Container technology was also a bit too complex to set up, manage, and automate. These are a few of the reasons why we did not see mass adoption of containers in the last few years, even though we had the technology.

With Docker, containers suddenly became first-class citizens. All big corporations such as Google, Microsoft, Red Hat, IBM, and others are now working to make containers mainstream. Docker was started as an internal project at dotCloud by Solomon Hykes, who went on to become the CTO of Docker, Inc. It was released as open source in March 2013 under the Apache 2.0 license. With dotCloud's platform-as-a-service experience, the founders and engineers of Docker were well aware of the challenges of running containers, so with Docker they developed a standard way to manage them.

Friday, 13 July 2018

How to set and get environment variables with Node.js

When using Node.js as an alternative to bash scripting, it is useful to know how to manage environment variables.

Get an environment variable:

 var value_variable = process.env.<variable_name>
Example:
 var java_home = process.env.JAVA_HOME;
 console.log("JAVA HOME: " + java_home);
Output:
 JAVA HOME: /opt/jdk1.6

Set an environment variable:

 process.env['VAR'] = "value";
Example:
 process.env['JAVA_HOME'] = "/opt/jdk1.8";
 var java_home = process.env.JAVA_HOME;
 console.log("JAVA HOME: " + java_home);
Output:
 JAVA HOME: /opt/jdk1.8
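One related idiom worth noting: values in process.env are always strings, and reading an unset variable yields undefined, so a fallback default is often useful. A minimal sketch, assuming JAVA_HOME may be unset:
 // use the environment value if present, otherwise fall back to a default
 var java_home = process.env.JAVA_HOME || "/opt/jdk1.8";
 console.log("JAVA HOME: " + java_home);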

Thursday, 5 July 2018

Creating an efficient branching system - Git Flow and BPF

The best approach is to keep the master branch as the latest stable version of your repository and to develop the branching system around it.
Git Flow [1] and Branch Per Feature (BPF) [2] are two models based on this approach.

Git Flow

In 2010, a Dutch iOS developer, Vincent Driessen, published the article A successful Git branching model [1]. In this article, he presents how he set up his branching model. His branching strategy starts by creating two main branches:
  • master
  • develop
The master branch is the main branch of the project and will always be in a ready-for-production state. Both branches live on the remote repository (origin).
So, whenever you clone the repository on the master branch, you will have the latest stable version of the project, which is very important.

The develop branch reflects all the new features for the next release. When the code inside the develop branch is stable (meaning you have finished all the changes for the next release and tested them), you have reached a stable point on the develop branch. Then, you can merge the develop branch into master.


Around these two branches, Vincent Driessen also used other branches that can be categorized into three types:
  • Feature branches
  • Release branches
  • Hotfix branches 

Feature branches


A feature branch is named based on what your feature is about and will exist as long as the feature is in development. 
Feature branches only exist in local developer repositories; do not push them to the remote repository.
When your feature is ready, you can merge your feature branch into develop and delete the branch.
Execute the following steps:


  1. Go back to the develop branch;
  2. Merge the feature branch into develop by creating a new commit object;
  3. Delete the feature branch (it represents a feature that is now part of the develop branch, so there is no reason to keep it);
  4. Push your changes to the remote repository.



The steps described above map to the following Git commands (the --no-ff flag forces a real merge commit even when a fast-forward would be possible, which preserves the history of the feature branch):
# step 1
$ git checkout develop
# step 2
$ git merge --no-ff featureBranch
# step 3
$ git branch -d featureBranch
# step 4
$ git push origin develop 

Release branches

You will use a release branch to prepare minor changes between two big releases. It is named after the version number of the project.
At this point, an example is necessary to explain the process. Let's imagine that we released our website and it is tagged as version 1.0. We are working on the next big release, which will include a blog. While developing your next great feature on a feature branch called blog, you find a minor bug in production. So, we create a release branch from the develop branch, which we will name release/1.1:
$ git checkout -b release/1.1 develop
We can fix this bug, but before releasing it, there is a tricky part. Fortunately, this is easy to understand.
First, you have to merge this branch release into master:
$ git checkout master
$ git merge --no-ff release/1.1
Then, you can tag your project to the new release version:
$ git tag -a 1.1
You will probably notice that your develop branch doesn't include the changes!
To fix this, you have to merge the release branch into develop:
$ git checkout develop
$ git merge --no-ff release/1.1 
When it's done, delete the release branch:
$ git branch -d release/1.1

Hotfix branches

These kinds of branches are very similar to release branches. They are used to fix a critical bug in production.
The goal is to quickly fix a bug while the other team members can work on their features.
For example, your website is tagged as 1.1, and you are still developing the blog feature on the blog branch. You find a huge bug in the slider on the main page, so you create a hotfix branch to fix it as soon as possible.
Create a hotfix branch named hotfix/1.1.1, starting from master:
$ git checkout -b hotfix/1.1.1 master
Fix the bug and merge it to master (after a commit, of course):
$ git checkout master
$ git merge --no-ff hotfix/1.1.1
$ git tag -a 1.1.1
As with the release branch, merge the hotfix branch into the current release branch (if one exists) or into develop. Then delete it:
$ git checkout develop
$ git merge --no-ff hotfix/1.1.1
$ git branch -d hotfix/1.1.1

Branch Per Feature (BPF)

As mentioned earlier, Git Flow might suit your project, but this is not always the case.
The Branch Per Feature model was described by Adam Dymitruk in 2012 [2]. He tried to combine the power of Git with Continuous Integration.
He gave some tips for a more efficient branching strategy:
  • Divide your project into several sprints. 
  • For each sprint, there are several features to develop. 
  • The features should be small. Develop a small part of the feature at a time, and for each part, create a dedicated feature branch. So, there will be a lot of branches with few commits in them. 
  • Merge your branch into the develop branch when it's ready. 
  • Use a Continuous Integration tool on a Quality Assurance (QA) branch so that you will be notified sooner when something is wrong with your feature. 
  • When it passes the tests, the QA branch is merged into master, and you just have to tag the new version.
Ideally, every time you start a sprint, you create the feature branches and the QA branch.
The aims of this strategy are as follows:
  • All your work is split into feature branches.
  • All feature branches start from master (that is, from the last release). When you start a sprint, you create your feature branches at the same time.
  • Your code is tested sooner.
The QA branch is like the develop branch from Git Flow: you shouldn't deploy it, and you have to recreate it on every release.
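A minimal sketch of one sprint under this strategy might look like the following (the branch names qa and feature/login are hypothetical):
# recreate the QA branch from the last release on master
$ git checkout -b qa master
# one dedicated branch per small feature, also starting from master
$ git checkout -b feature/login master
# ...develop and commit on feature/login...
# merge the finished feature so the CI tool can test it on the QA branch
$ git checkout qa
$ git merge --no-ff feature/login
# when the tests pass, merge into master and tag the new version
$ git checkout master
$ git merge --no-ff qa
$ git tag -a 1.2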

Links:

[1] Git Flow: http://nvie.com/posts/a-successful-git-branching-model/
[2] Branch Per Feature: http://dymitruk.com/blog/2012/02/05/branch-per-feature/

Wednesday, 4 July 2018

Creating and using a Git repo in an SVN environment

If you want to use Git as your versioning system, you don't necessarily have to migrate every repository from SVN to Git; you can also use Git locally. The git svn command will help you do this. It may happen that your team doesn't want to change its versioning system, or that a project is far too big to migrate to a new one. So, Git has a solution for you: how about using Git features without anyone knowing or caring?
The following diagram explains how to use Git inside an SVN environment. When you execute a Git command, the SVN environment will not notice, because the git svn command converts all your commands.

Setting up your repository

We assume that you already have an SVN repository and that you want to use Git locally. As a first step, clone the SVN repository using this command:
$ git svn clone -s http://mysvnrepo/svn/myproject myproject_gitsvn_local
The -s option stands for standard layout, which means that your Subversion layout has the three standard directories (trunk, branches, and tags). You can, of course, omit this option if your repository does not have a standard layout.
This creates a Git repository under the myproject_gitsvn_local directory that is mapped to the trunk folder of your subversion repository.
As Git doesn't track empty directories, the empty directories under the trunk won't appear inside your Git repository.
Sometimes you might have to clone a big repository. In this case, fetching the whole commit history will take a long time. There is a way to clone it without waiting so long: you can clone the repository starting from a more recent revision:
$ git svn clone -s -r625:HEAD http://mysvnrepo/svn/myproject myproject_gitsvn_local
There is one last thing to set up: every file ignored by SVN has to be ignored by Git too. To do this, transfer the SVN ignore properties into the .gitignore file:
$ git svn show-ignore > .gitignore
There is an alternative method that uses the update-index command:
$ git update-index --assume-unchanged filesToIgnore

Working with Git SVN

Once your repository is ready, you can work on it and start executing Git commands as we saw earlier. Of course, there are some commands to execute when you want to push or pull from the SVN repository. When you want to update your local Git repository, just type this:
$ git svn rebase
To commit back to SVN, use the following command:
$ git svn dcommit
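Putting it together, here is a sketch of the typical day-to-day loop (the file name and commit message are placeholders):
# commit locally with Git as usual
$ git add myfile.txt
$ git commit -m "Fix the slider bug"
# replay the local commits on top of the latest SVN revision
$ git svn rebase
# push each local Git commit back to SVN as a revision
$ git svn dcommit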
Sooner or later, you will accidentally add the .svn folder to the staging area in Git. Fortunately, there is a way to remove it:
$ git status -s | grep .svn | awk '{print $2}' | xargs git rm --cached

Tuesday, 3 July 2018

Managing Git submodules

A Git submodule is helpful for a project that requires a dependency on another project. For example, this can be a library developed by you or by another team. Such a dependency can be hard to manage when the library is updated while you have custom code inside your project.
Git handles this by using submodules. It allows you to manage a Git repository as a subfolder of another Git repository, which in turn lets you clone a repository isolated from the commits of the current repository.
The following points will be illustrated below:
  • Adding a submodule.
  • Cloning a project with submodules.
  • Removing a submodule.
  • Using a subtree instead of a submodule.
  • Adding a subproject with a subtree.
  • Contributing on a subtree.

Adding a submodule

Let's imagine you want to add the myutil library, which helps you develop your feature. The first thing to do is to clone the library's Git repository into a subfolder:
$ git submodule add https://mygitserver/scm/myutil.git myutil
You now have the myutil project inside the myutil folder.
You can do everything you want inside it, such as add modifications, change the remote repository, push your changes in the remote repository, and so on.
When you add the Git submodule, Git stages two new entries, myutil and .gitmodules.
Let's see this with the git status command:
$ git status
# On branch master
# Changes to be committed:
# (use "git reset HEAD <file>…" to unstage)
# new file: .gitmodules
# new file: myutil
Git sees the myutil folder as a submodule, so it won't track changes unless you are inside that folder. An important fact is that Git saves this addition as a pointer to a specific commit of the submodule repository, so if someone clones your project, Git can recreate the same environment.

Cloning a project with submodules

If you clone a project that uses submodules, Git will fetch all files except the contents of the submodules:
$ git clone https://mygitserver/scm/myproject.git
$ ls
myfile.txt myutil
$ cd myutil
$ ls
$
The myutil folder is created, but it is empty. You will have to execute these two commands to initialize and fetch the submodule:
$ git submodule init
$ git submodule update
Your repository is now up to date. Using submodules can be interesting if you want to separate and isolate some parts of your code.
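As a convenience, modern Git versions can clone a project and initialize its submodules in a single step with the --recursive flag:
$ git clone --recursive https://mygitserver/scm/myproject.git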
If you execute a git pull command on your project, you will not automatically get the latest version of the submodule. For this, you have to execute git submodule update every time you want to update your submodule.

Removing a submodule

To remove a submodule from your project, you have to execute these steps:
  1. Delete the lines of the submodule from the .gitmodules file.
  2. Delete the submodule part from .git/config.
  3. Delete the submodule from Git by executing this command:
    $ git rm --cached submodule_path
  4. Commit and delete the untracked files, as shown in the sketch below.
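Putting these steps together for our myutil example, a sketch could look like this (the section name submodule.myutil follows from the path used in the earlier git submodule add):
# step 1: drop the submodule section from .gitmodules
$ git config -f .gitmodules --remove-section submodule.myutil
# step 2: drop the submodule section from .git/config
$ git config -f .git/config --remove-section submodule.myutil
# step 3: remove the submodule entry from the index
$ git rm --cached myutil
# step 4: commit, then delete the leftover working files
$ git commit -m "Remove the myutil submodule"
$ rm -rf myutil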

Using a subtree instead of a submodule

The use of git submodule is not always a best practice, because you have to run git submodule update every time, and you will probably forget to do so. The second problem is that Git doesn't really handle merging into a submodule: it detects SHA conflicts, but that is all. It's left to you to figure out what should be done.
Thankfully, there is the subtree, which is better in a few ways:
  • It is easy to manage for a light workflow.
  • When you clone a superproject, the subproject's code is available too.
  • Subtree doesn't use files such as .gitmodules. 
  • The most important point is that contents can be modified inside your project without having a copy of the dependency elsewhere.
Git subtree has been available since version 1.7.11 of Git, delivered in May 2012.

Adding a subproject with a subtree

Firstly, we need to tell Git that we want to include a project as a subtree. We use the git remote command to specify where the remote repository of this subtree is:
$ git remote add -f myutil_remote https://mygitserver/scm/myutil.git
Now, you can add the subtree inside your project using the remote repository:
$ git subtree add --prefix myutil myutil_remote master --squash
This will create the subproject. If you want to update it later, you will have to use the fetch and subtree pull commands:
$ git fetch myutil_remote master
$ git subtree pull --prefix myutil myutil_remote master --squash 

Contributing on a subtree

Obviously, you can commit your fixes to the subproject in the local directory, but when you push back upstream, you will have to use another remote repository:
$ git remote add info-myutil https://mygitserver/scm/myutil.git
$ git subtree push --prefix=myutil info-myutil master
Git push using: info-myutil master
Counting objects: 1, done.
Delta compression using up to 1 thread.
Compressing objects: 100% (1/1), done.
Writing objects: 100% (1/1), 170 bytes, done.
Total 1 (delta 1), reused 0 (delta 0)
To https://mygitserver/scm/myutil.git
   ... -> master
Git subtree can be an easy option if you have to update your subproject frequently and want to contribute to it with little effort.

Monday, 2 July 2018

JBoss 7.1.1 - java.io.IOException: No space left on device - start failed

 Issue: "java.io.IOException: No space left on device" - JBoss AS 7.1.1

The following exception can occur during the start of JBoss AS even though the file system (or disk partition) is not 100% full.
Caused by: org.jboss.as.server.deployment.DeploymentUnitProcessingException: Failed to mount deployment content
        at org.jboss.as.server.deployment.module.DeploymentRootMountProcessor.deploy(DeploymentRootMountProcessor.java:91) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final]
        at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:113) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final]
        ... 5 more
Caused by: java.io.IOException: No space left on device
        at java.io.FileOutputStream.writeBytes(Native Method) [rt.jar:1.6.0_37]
        at java.io.FileOutputStream.write(FileOutputStream.java:282) [rt.jar:1.6.0_37]
        at org.jboss.vfs.VFSUtils.copyStream(VFSUtils.java:442)
        at org.jboss.vfs.VFSUtils.copyStream(VFSUtils.java:422)
        at org.jboss.vfs.VFSUtils.unzip(VFSUtils.java:872)
        at org.jboss.vfs.VFS.mountZipExpanded(VFS.java:536)
        at org.jboss.vfs.VFS.mountZipExpanded(VFS.java:567)

Solution

Remove the temp files:
 $JBOSS_HOME/standalone/tmp/*
 $JBOSS_HOME/standalone/configuration/standalone_xml_history/*
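For example, with the server stopped and $JBOSS_HOME pointing at your installation:
 $ rm -rf $JBOSS_HOME/standalone/tmp/*
 $ rm -rf $JBOSS_HOME/standalone/configuration/standalone_xml_history/*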
Restart the JBoss AS.

Technology stack

  • OS: CentOS (Linux distribution)
  • JDK 1.6_x
  • JBoss AS 7.1.1 
