Thursday, 28 September 2017

Bash commands for Navigation and File Management

Current working directory - pwd

To find out where you are in relation to the rest of the filesystem, you can use the pwd (print working directory) command. This command displays the directory that you are currently in.
$ pwd

List information about the files - ls

To list the contents of the directory that you are in, use the "ls" command.
$ ls
For instance, to list all of the contents in an extended form, we can use the -l flag (for "long" output):
$ ls -l

Change the working directory - cd

To change into the mydirectory directory, type:
$ cd mydirectory
You can use either an absolute or a relative path.
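Putting these three commands together, a short session might look like this (run in a scratch directory, so the names here are just placeholders):

```shell
# create a scratch area to play in
cd "$(mktemp -d)"
mkdir mydirectory

pwd               # prints the current (scratch) directory
ls -l             # long listing: shows mydirectory
cd mydirectory    # change directory using a relative path
cd /tmp           # change directory using an absolute path
```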

Viewing a file - cat 

Use the "cat" command to read the contents of a file. This command concatenates one or more files to standard output. Example:
$ cat myfile.txt

Create a file - touch

The "touch" command creates an empty file (or updates the timestamp of an existing one). Example:
$ touch myfile.txt

Create a directory - mkdir

The "mkdir" command creates a new directory in your filesystem. Example:
$ mkdir mydirectory
To tell mkdir that it should create any directories necessary to construct a given directory path, you can use the -p option:
$ mkdir -p dir1/dir2/mydirectory
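For example, mkdir -p builds the whole chain in one step (shown here in a scratch directory so nothing real is touched):

```shell
cd "$(mktemp -d)"
mkdir -p dir1/dir2/mydirectory   # creates dir1 and dir1/dir2 as needed
ls dir1/dir2                     # shows: mydirectory
```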

Moving and Renaming Files and Directories - mv

You can move a file to a new location using the mv command. For example, you can move myfile into the dir1 directory by typing:
$ mv myfile dir1
The same command also renames files and directories. To rename the dir1 directory to directory1:
$ mv dir1 directory1

Copy files and directories - cp

The cp command can make a new copy of an existing file or directory. For example, you can copy myfile.txt into the same directory but with a different name (myfile2.txt):
$ cp myfile.txt myfile2.txt
If you want to copy a directory, use the "-r" (recursive) option:
$ cp -r mydir new_mydir
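A small self-contained session showing both forms (the file and directory names are placeholders):

```shell
cd "$(mktemp -d)"
mkdir mydir
echo "hello" > mydir/myfile.txt

cp mydir/myfile.txt mydir/myfile2.txt   # copy a single file
cp -r mydir new_mydir                   # copy a whole directory tree
ls new_mydir                            # shows: myfile.txt  myfile2.txt
```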

Remove a file - rm

The rm command removes a file. For example, you can remove myfile2.txt with the following command:
$ rm myfile2.txt

Remove a directory - rmdir

The rmdir command removes an empty directory. For example, you can remove new_mydir with the following command:
$ rmdir new_mydir
Alternatively (and for non-empty directories) you can use the following command:
$ rm -r new_mydir

Remove all files (recursively)

You can recursively remove all files or directories matching a given name (or pattern) under a specific path. For example, to remove all .svn directories from your workspace, execute the following command from the workspace directory:

$ find . -name .svn -exec rm -rf {} \;
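A slightly safer variant of the command above restricts the match to directories and prunes them, so find does not try to descend into what it is about to delete (demonstrated here on a throwaway tree):

```shell
cd "$(mktemp -d)"
mkdir -p project/moduleA/.svn project/moduleB/.svn

# -type d matches only directories; -prune stops find from descending into them
find . -name .svn -type d -prune -exec rm -rf {} +
find . -name .svn    # no output: everything was removed
```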

Find all SNAPSHOT versions in a Maven project:
$ find . -name pom.xml | xargs grep "SNAPSHOT"

Disk Usage - du

du reports the amount of disk space used by the specified files and by each subdirectory. The -h flag prints human-readable sizes:
$ du -h
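Two commonly combined flags are -s (summarize) and -h; for example (run here against a small scratch directory):

```shell
cd "$(mktemp -d)"
mkdir docs && echo "some text" > docs/notes.txt

du -h        # usage of each subdirectory, human-readable
du -sh *     # one summary line per entry in the current directory
du -sh .     # a single total for the current directory
```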


Monday, 25 September 2017

How to store your git credentials

Use the following command:
 git config --global credential.helper store
The next time you are prompted for your credentials, a ".git-credentials" file will be created in your home directory.
After that you never have to enter your credentials again.

Storage format

The .git-credentials file is stored in plaintext. Each credential is stored on its own line as a URL like:
 https://<user>:<password>@<hostname>

Example

The following session stores your password in your home directory.

$ git config --global credential.helper store
$ git push http://yourserver.com/repo.git
Username: <type your username>
Password: <type your password>

[several days later]
$ git push http://yourserver.com/repo.git
[your credentials are used automatically]

Friday, 15 September 2017

How to use ssh keys with putty

Overview

PuTTY is a free and open-source terminal emulator, serial console and network file transfer application. It supports several network protocols, including SCP, SSH, Telnet, rlogin, and raw socket connection. It can also connect to a serial port. The name "PuTTY" has no definitive meaning.
PuTTY was originally written for Microsoft Windows, but it has been ported to various other operating systems. Official ports are available for some Unix-like platforms, with work-in-progress ports to Classic Mac OS and macOS, and unofficial ports have been contributed to platforms such as Symbian, Windows Mobile and Windows Phone.

See the following guide for installation.

SSH Configuration

Open PuTTY and click on the "Auth" item (1), then click on the "Browse..." button (2) and select your private key.
Click on the "Session" item (3), enter the hostname (4) and a saved session name (5). Click on the "Save" button (6) to save the session; later you can select the saved session, click "Load", and then "Open" to start the SSH connection.
A good suggestion: in step 4, use the following format:
 <sshuser>@<hostname or ip>
Example:
 myuser@myserver


Thursday, 14 September 2017

java.lang.OutOfMemoryError: GC overhead limit exceeded

Java runtime environment contains a built-in Garbage Collection (GC) process. In many other programming languages, the developers need to manually allocate and free memory regions so that the freed memory can be reused.
Java applications on the other hand only need to allocate memory. Whenever a particular space in memory is no longer used, a separate process called Garbage Collection clears the memory for them. How the GC detects that a particular part of memory is no longer used is explained in more detail in the Garbage Collection Handbook, but you can trust the GC to do its job well.

The cause

The java.lang.OutOfMemoryError: GC overhead limit exceeded error is the JVM’s way of signalling that your application spends too much time doing garbage collection with too little result. By default the JVM is configured to throw this error if it spends more than 98% of the total time doing GC and when after the GC only less than 2% of the heap is recovered.
The java.lang.OutOfMemoryError: GC overhead limit exceeded error is displayed when your application has exhausted pretty much all the available memory and GC has repeatedly failed to clean it.

Example

The following class triggers a "GC overhead limit exceeded" error by initializing a Map and adding key-value pairs to it in an infinite loop:

package com.blogspot.informationtechnologyarchive;

import java.util.HashMap;
import java.util.Map;
import java.util.Random;

public class ExampleGCOverheadLimit {

    public static void main(String[] args) {
        Map<Integer, String> map = new HashMap<Integer, String>();
        Random r = new Random();
        while (true) {
            map.put(r.nextInt(), "value");
        }
    }

}
As you might guess, this cannot end well. And indeed, when you launch the above program with:
java -Xmx100m -XX:+UseParallelGC com.blogspot.informationtechnologyarchive.ExampleGCOverheadLimit
you soon face the java.lang.OutOfMemoryError: GC overhead limit exceeded message.

Solution

As a workaround, if you simply want to get rid of the "java.lang.OutOfMemoryError: GC overhead limit exceeded" message, adding the following to your startup options will achieve just that:
 -XX:-UseGCOverheadLimit
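Note that this flag only disables the check; the underlying memory problem remains, and you will typically hit a plain java.lang.OutOfMemoryError: Java heap space instead. A sketch of the launch command, assuming the example class above has been compiled and is on the classpath:

```shell
java -Xmx100m -XX:+UseParallelGC -XX:-UseGCOverheadLimit \
     com.blogspot.informationtechnologyarchive.ExampleGCOverheadLimit
```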

Wednesday, 13 September 2017

How to enable the copy paste in Ubuntu VM with VirtualBox guest to Windows

Open VirtualBox, select your Ubuntu VM and click on Settings button:
Now set "Shared Clipboard" to "Bidirectional" (under General > Advanced):
Start your Ubuntu VM and, in a shell, install the following VirtualBox guest packages with this command:
 sudo apt-get install virtualbox-guest-dkms virtualbox-guest-utils virtualbox-guest-x11
Shut down the Ubuntu VM, close and reopen VirtualBox, then start the Ubuntu VM again.
DONE.


Tuesday, 12 September 2017

How to deploy jars to your Artifactory server

The build phase generates one or more artifacts. Maven provides a local repository to store these artifacts, but it is also possible to store them in a remote repository such as an Artifactory server.
To deploy jars to your Artifactory server, add the following configuration to your settings.xml (in $HOME/.m2). This configuration stores the credentials used to authenticate against the Artifactory repositories.
<servers>
  .......
  <server>
    <id>central</id>
    <username>user</username>
    <password>password</password>
  </server>
  <server>
    <id>snapshots</id>
    <username>user</username>
    <password>password</password>
  </server>
  .......
</servers>
Now you can add the repository configuration, again in settings.xml.
<repositories>
  ......
  <repository>
    <snapshots>
      <enabled>false</enabled>
    </snapshots>
    <id>central</id>
    <name>libs-releases</name>
    <url>http://myartifactory:8080/artifactory/libs-releases</url>
  </repository>
  <repository>
    <snapshots>
      <enabled>true</enabled>
      <updatePolicy>always</updatePolicy>
      <checksumPolicy>fail</checksumPolicy>
    </snapshots>
    <id>snapshots</id>
    <name>libs-snapshots</name>
    <url>http://myartifactory:8080/artifactory/libs-snapshots</url>
  </repository>
  ......
</repositories>
You can execute the following command:
$ mvn deploy
If your Maven project has a SNAPSHOT version, the jars will be deployed to the "libs-snapshots" repo; otherwise they will be deployed to "libs-releases".
If you want to deploy the jars to a repo not configured in the settings.xml file, you can use the -DaltDeploymentRepository option. Example:
$ mvn deploy -DaltDeploymentRepository="otherRepoReleases::default::https://otherartifactory:8180/artifactory/otherRepoReleases"
If the repository requires authentication, you can configure it in settings.xml as shown above.
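If you only need to push a single jar rather than run a full build, Maven's deploy:deploy-file goal can do it directly. In this sketch every coordinate, server id, and URL is hypothetical; the repositoryId must match a <server> entry in settings.xml so that the credentials are found:

```shell
mvn deploy:deploy-file \
  -Dfile=target/myartifact-1.0.0.jar \
  -DgroupId=com.example \
  -DartifactId=myartifact \
  -Dversion=1.0.0 \
  -Dpackaging=jar \
  -DrepositoryId=otherRepoReleases \
  -Durl=https://otherartifactory:8180/artifactory/otherRepoReleases
```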

Monday, 11 September 2017

How to use SSH to connect from linux to linux server

SSH uses public-key cryptography to authenticate the remote computer and, if necessary, to allow it to authenticate the user. Linux provides the ssh command in the shell.
The following examples are also valid from and to Mac OS X.

Using SSH in shell bash

For example, to connect to myremotehost (IP: 205.200.99.33), use the following command:
 ssh myremotehost 
or
 ssh 205.200.99.33
Once the connection is established, you will be prompted for a username. Alternatively, you can use the ssh command as follows:
 ssh username@myremotehost
The username is an OS user on the remote computer.
The password cannot be passed on the command line; you enter it when prompted. An alternative authentication method uses certificates: with an SSH key installed on the server, you do not have to enter a password.
An important use case of the ssh command involves the possibility of launching bash commands remotely:
 ssh username@myremotehost "echo Hello World!!!"
This usage is particularly useful in bash scripting, but it requires an SSH key for batch mode.
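For example, a script can loop over several machines; BatchMode=yes makes ssh fail immediately instead of prompting when no key is available (the host and user names here are hypothetical):

```shell
#!/bin/sh
# run a command on several remote hosts non-interactively
for host in web1 web2 web3; do
  ssh -o BatchMode=yes -o ConnectTimeout=5 "deploy@$host" 'uptime' \
    || echo "WARN: could not reach $host"
done
```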


SSH key


An SSH key will let you automatically log into your server from one particular computer without needing to enter your password. This is convenient for two reasons:
  1. Automation: a bash script can run ssh commands in batch mode.
  2. Security: each connection is associated with an SSH key, and therefore with a specific user.

 How to configure the ssh key

  1. In your server: make the initial ssh connection as root and change to the home directory for the user you are creating the key for, then create the .ssh directory.
     cd /home/<user> && mkdir .ssh
  2. In local computer: generate a ssh key using strong encryption.
     ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa -C "An optional comment about your key"
  3. In local computer: set the correct permissions on the .ssh directory.
     chmod 700 ~/.ssh && chmod 600 ~/.ssh/*
  4. In local computer: append your public key to the authorized_keys file on your server. This file contains the list of authorized public keys.
     cat ~/.ssh/id_rsa.pub | ssh <user>@<host> 'cat - >> ~/.ssh/authorized_keys'
  5. In your server: set the correct permissions on the .ssh directory and the authorized_keys file.
     chmod 600 ~/.ssh/authorized_keys && chmod 700 ~/.ssh/

How to connect

The default path for new keys is ~/.ssh/id_rsa, and this is where SSH will look for your key. If you use a different key, use the following command:
 ssh -i <ssh_key_path>/<private_key> <user>@<host>
Example:
 ssh -i new_path/other_key anuser@myhost
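Instead of typing -i every time, you can record the key per host in ~/.ssh/config; the snippet below writes such an entry (the alias, hostname, user, and key path are all hypothetical):

```shell
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host myserver
    HostName myserver.example.com
    User anuser
    IdentityFile ~/new_path/other_key
EOF
# from now on, a plain "ssh myserver" uses that user and key
```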


Creating a simple server Git repository

As a precondition, it is important to clarify the following definition:
A server repository, also called "bare repository", is a Git repository without a working copy.
Git can use four protocols to transport data:
  • Local;
  • Secure Shell (SSH);
  • Git;
  • HTTPS.
For all protocols, we first have to create the bare repository by executing the following commands on the server. Create the directory myproject, go inside it, and initialize an empty bare Git repository:
$ mkdir myproject
$ cd myproject
$ git init --bare
Initialized empty Git repository in /home/user/myproject/

Local protocol

The local protocol is the basic protocol: the remote repository is simply a local directory. This protocol is used when all members have access to the remote repository. Now, every programmer has to clone it locally:
$ git clone /opt/git/myproject.git
For example, one of the programmers has already written some lines of code. He has to initialize a local Git repository inside his directory and set the bare repository as a remote:
developer@local$ git init
developer@local$ git remote add origin /opt/git/myproject.git
This example will be repeated for each protocol; we will refer to it as the "developer example".
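The whole local-protocol flow can be exercised end to end with throwaway paths (the repository location and the committer identity below are placeholders):

```shell
root="$(mktemp -d)"

# server side: create the bare repository
git init --bare "$root/myproject.git"

# developer side: a working directory with some code
mkdir "$root/work" && cd "$root/work"
echo "hello" > README
git init
git add README
git -c user.name=Dev -c user.email=dev@example.com commit -m "first commit"

# point the local repo at the bare one and push the current branch
git remote add origin "$root/myproject.git"
git push origin HEAD
```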
The following are the pros of the local protocol:
  • Easy to share with other members.
  • Fast access on the repository.
And the cons are:
  • Hard to set up a shared network.
  • Fast only if the file access is fast.

SSH

Secure Shell (SSH) is the most widely used protocol, especially when the remote repository is on a remote server. Again, every programmer first has to clone it locally:
$ git clone ssh://username@server/myproject.git
Using the SSH protocol, programmers have to install their SSH keys on the remote repository in order to push to and pull from it. Otherwise, they have to specify the password on each remote command.
The developer example in this scenario:
developer@local$ git init
developer@local$ git remote add origin ssh://username@server/myproject.git
The following are the pros of the ssh protocol:
  • Easy to share using a remote server.
  • SSH compresses data during transport, which makes it fast.
And the con is:
  • No anonymous access.

Git

The Git transport is similar to SSH, but without any security (and without authentication, so no username in the URL). You can't push data over it by default, but you can activate this feature. As in all cases, the programmer has to clone it locally:
$ git clone git://server/myproject.git
The developer example in this scenario:
developer@local$ git init
developer@local$ git remote add origin git://server/myproject.git
The following is the pro of the git:
  • Faster than the others.
And the con is:
  • No security because the Git transport is the same as SSH but without the security layer.

HTTPS

The HTTPS protocol is the easiest to set up. Anyone who has access to the web server can clone it. The programmers start by cloning it locally:
$ git clone https://server/myproject.git
The developer example in this scenario:
developer@local$ git init .
developer@local$ git remote add origin https://server/myproject.git
The following is the pro of https:
  • Easy to set up.
And the con is:
  • Very slow data transport.

Internal Links

May be of interest to you:

  1. GIT - getting started

Friday, 8 September 2017

GIT - getting started

Git is a version control system (VCS) for code. It keeps track of revisions and allows a development team to work together on a project through branches. This quick guide gives an overview of Git using the command-line interface (CLI).

To begin, clone a remote repository into a local directory:
$ git clone http://<username>@<repository_url>
From your directory you can switch branches with:
$ git checkout <branch_name>
If you want to create a new local branch:
$ git checkout -b <branch_name>
To show the current branch:
$ git branch
The command shows the branch list; the current branch is marked with *. To see the status of the changed files:
$ git status
When a file has changes, you can revert them with:
$ git checkout -- <file_with_changes>
or you can stage it for the next commit:
$ git add <file_with_changes>
Now you can commit the file to your local repository:
$ git commit -m "your comment"
To update the local repository with the remote repository:
$ git pull
To merge a branch into the current branch:
$ git merge <branch_target>
To update the branch on the remote repository:
$ git push origin <your_branch>
To create a local tag
$ git tag <tag_name>
To push a local tag to the remote repository:
$ git push origin <tag_name>
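All of the commands above can be tried safely against a throwaway local repository (the paths, branch, and tag names here are arbitrary):

```shell
root="$(mktemp -d)"
git init --bare "$root/origin.git"        # stand-in for the remote
git clone "$root/origin.git" "$root/clone"
cd "$root/clone"
git config user.name Dev
git config user.email dev@example.com

echo "v1" > app.txt
git add app.txt
git commit -m "initial commit"

git checkout -b feature                   # create and switch to a branch
echo "v2" > app.txt
git add app.txt
git commit -m "change on feature"

git checkout -                            # back to the previous branch
git merge feature                         # fast-forward merge
git tag v1.0                              # local tag
git push origin HEAD v1.0                 # push the branch and the tag
```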

Thursday, 7 September 2017

Microservice Architecture and Service Oriented Architecture (SOA)

Are they the same concept? Let's see the two definitions:

The term "Microservice Architecture" has sprung up over the last few years to describe a particular way of designing software applications as suites of independently deployable services. While there is no precise definition of this architectural style, there are certain common characteristics around organization around business capability, automated deployment, intelligence in the endpoints, and decentralized control of languages and data. (cit Martin Fowler [1]).

Service Oriented Architecture (SOA) is a paradigm for organizing and utilizing distributed capabilities that may be under the control of different ownership domains. (Reference Model for Service Oriented Architecture 1.0 [2]).

Comparing the two definitions seems fairly clear that although they have some similar aspects are not the same. A good article, "Microservices is SOA, for those who know what SOA is" [3] of Steve Jones, illustrates a series of points which highlight the differences between the two architectures.

Characteristics of a Microservice Architecture

Let's analyze the characteristics of a microservice architecture.

Componentization via Services

A component is a unit of software that is independently replaceable and upgradeable. Microservice architectures will use libraries, but their primary way of componentizing their own software is by breaking it down into services. We define libraries as components that are linked into a program and called using in-memory function calls, while services are out-of-process components that communicate through a mechanism such as a web service request or a remote procedure call. One main reason for using services as components is that services are independently deployable. Another consequence of using services as components is a more explicit component interface.

Organized around Business Capabilities

The microservice approach to division is different, splitting up into services organized around business capability. Such services take a broad-stack implementation of software for that business area, including user interface, persistent storage, and any external collaborations. Consequently the teams are cross-functional, including the full range of skills required for the development: user experience, database, and project management.

Products not Projects

In the project model, on completion the software is handed over to a maintenance organization and the project team that built it is disbanded. Microservice proponents tend to avoid this model, preferring instead the notion that a team should own a product over its full lifetime.

Smart endpoints and dumb pipes

The microservice community favours an alternative approach: smart endpoints and dumb pipes. Applications built from microservices aim to be as decoupled and as cohesive as possible - they own their own domain logic and act more as filters in the classical Unix sense - receiving a request, applying logic as appropriate and producing a response.
The two protocols used most commonly are HTTP request-response with resource API's and lightweight messaging. Microservice teams use the principles and protocols that the world wide web is built on. The second approach in common use is messaging over a lightweight message bus. The infrastructure chosen is typically dumb (dumb as in acts as a message router only) - simple implementations such as RabbitMQ or ZeroMQ don't do much more than provide a reliable asynchronous fabric - the smarts still live in the end points that are producing and consuming messages; in the services. In a monolith, the components are executing in-process and communication between them is via either method invocation or function call. The biggest issue in changing a monolith into microservices lies in changing the communication pattern.

Decentralized Governance

One of the consequences of centralised governance is the tendency to standardise on single technology platforms. Teams building microservices prefer a different approach to standards too. Rather than use a set of defined standards written down somewhere on paper they prefer the idea of producing useful tools that other developers can use to solve similar problems to the ones they are facing.

Decentralized Data Management

Decentralization of data management presents in a number of different ways. At the most abstract level, it means that the conceptual model of the world will differ between systems. This is a common issue when integrating across a large enterprise: the sales view of a customer will differ from the support view. Some things that are called customers in the sales view may not appear at all in the support view. Those that do may have different attributes and (worse) common attributes with subtly different semantics. Monolithic applications prefer a single logical database for persistent data, and enterprises often prefer a single database across a range of applications. Microservices instead prefer letting each service manage its own database, either different instances of the same database technology or entirely different database systems - an approach called Polyglot Persistence.
[Figure: Monolithic vs Microservices Architecture]
Decentralizing responsibility for data across microservices has implications for managing updates. Using transactions helps with consistency, but imposes significant temporal coupling, which is problematic across multiple services. Distributed transactions are notoriously difficult to implement, and as a consequence microservice architectures emphasize transactionless coordination between services, with explicit recognition that consistency may only be eventual consistency and that problems are dealt with by compensating operations.

Infrastructure Automation

Many of the products or systems being built with microservices are built by teams with extensive experience of Continuous Delivery and its precursor, Continuous Integration. Teams building software this way make extensive use of infrastructure automation techniques.
[Figure: a typical build pipeline]
Another area where we see teams using extensive infrastructure automation is when managing microservices in production. In contrast to our assertion above that as long as deployment is boring there isn't that much difference between monoliths and microservices, the operational landscape for each can be strikingly different.

Design for failure

A consequence of using services as components is that applications need to be designed so that they can tolerate the failure of services. Any service call could fail due to unavailability of the supplier, and the client has to respond to this as gracefully as possible. This is a disadvantage compared to a monolithic design, as it introduces additional complexity to handle it. The consequence is that microservice teams constantly reflect on how service failures affect the user experience. Since services can fail at any time, it's important to be able to detect the failures quickly and, if possible, automatically restore service. Microservice applications put a lot of emphasis on real-time monitoring of the application, checking both architectural elements and business-relevant metrics. Semantic monitoring can provide an early warning system of something going wrong that triggers development teams to follow up and investigate. This is particularly important to a microservices architecture because the microservice preference towards choreography and event collaboration leads to emergent behavior.

References 
