RVM Installation issue : Public Key download issue

You might have gone through the RVM installation guide and tried this command:

$ gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3

This command downloads the RVM maintainer's public key, which GPG then uses to verify the integrity of the installer script.

Problem:

$ gpg --keyserver hkp://keys.gnupg.net --recv-keys D39DC0E3

gpg: requesting key D39DC0E3 from hkp server keys.gnupg.net
gpg: keyserver timed out
gpg: keyserver receive failed: keyserver error

 

Solution:

Despite that error, you go ahead and execute the following command:

$ \curl -sSL https://get.rvm.io | bash -s stable --ruby

# Output 

Downloading https://github.com/rvm/rvm/archive/1.27.0.tar.gz
Downloading https://github.com/rvm/rvm/releases/download/1.27.0/1.27.0.tar.gz.asc
gpg: Signature made Tuesday 29 March 2016 using RSA key ID BF04FF17
gpg: Can't check signature: No public key
Warning, RVM 1.26.0 introduces signed releases and automated check of signatures when GPG software found.
Assuming you trust Michal Papis import the mpapis public key (downloading the signatures).

GPG signature verification failed for '/home/john/.rvm/archives/rvm-1.27.0.tgz' - 'https://github.com/rvm/rvm/releases/download/1.27.0/1.27.0.tar.gz.asc'!
try downloading the signatures:

gpg2 --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3

or if it fails:

command curl -sSL https://rvm.io/mpapis.asc | gpg2 --import -

the key can be compared with:

https://rvm.io/mpapis.asc
 https://keybase.io/mpapis

The solution is right there in the error message. Try:

$ gpg2 --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3

If that fails, try this:

$ curl -sSL https://rvm.io/mpapis.asc | gpg2 --import -
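Optionally, you can confirm that the key actually landed in your keyring before re-running the installer (the long key ID is the same one used above):

# list the imported RVM signing key; if details are printed, the import worked
$ gpg2 --list-keys 409B6B1796C275462A1703113804BB82D39DC0E3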

Then try the installer again:

$ \curl -sSL https://get.rvm.io | bash -s stable --ruby

 

You are done.

Thanks for visiting


Linux : Unix : Update $PATH

If you want to put any directory into the load path, i.e. into your PATH environment variable, you can:

  • simply go to that directory
    • $ cd path/to/that/dir
    • $ PATH=$PATH:$(pwd)
    • Where
      • PATH= assigns a new value to the $PATH variable
      • $(pwd) is command substitution; it runs the Unix command pwd, which gives the path of the current working directory
      • Overall, this appends the current directory to $PATH, with : as the separator between the two paths (see the sketch after this list for making the change permanent)
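A line like this only affects the current shell session. A minimal sketch for making it permanent, assuming bash and ~/.bashrc (adjust the file for your shell):

# append the directory to PATH for all future shells (bash assumed)
$ echo 'export PATH=$PATH:/path/to/that/dir' >> ~/.bashrc
# reload the startup file in the current shell
$ source ~/.bashrc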

Note: Open PostgreSQL (psql) on the command line and do some basic tasks

To access the postgres console:

$ sudo -u postgres -i

postgres@host:~$ psql

or

$ sudo -u postgres psql

[sudo] password for john: 
psql (9.3.12)
Type "help" for help.

postgres=# 

 

# For general help
postgres=# help
You are using psql, the command-line interface to PostgreSQL.
Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit

# For help with SQL commands
postgres=# \h
Available help:
  ABORT            CLUSTER   DEALLOCATE   END
  ALTER AGGREGATE  COMMENT   DECLARE      EXECUTE
  ALTER COLLATION  COMMIT    DELETE       EXPLAIN
  ...

Create database

# SQL statements have to end with a semicolon
postgres=# create database july_prod_dump1;
CREATE DATABASE

Alter permission

ALTER USER new_user CREATEDB;
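As a fuller sketch, here is how you might create a role and hand it the new database from the shell; new_user and the password are placeholders, and july_prod_dump1 is the database created above:

# run as the postgres superuser; names and password are placeholders
$ sudo -u postgres psql -c "CREATE USER new_user WITH PASSWORD 'change_me';"
$ sudo -u postgres psql -c "ALTER USER new_user CREATEDB;"
$ sudo -u postgres psql -c "ALTER DATABASE july_prod_dump1 OWNER TO new_user;"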

 

Unix : SSH server : Add a new user's SSH keys

If you have not chosen to deploy your application to fancy web hosting services like Engine Yard, DigitalOcean, or Heroku, then you might want to know how to give other developers collaborating on the project SSH access to the server. In other words, you want to let others deploy the application via SSH (e.g. with Capistrano).

Well, it’s much simpler than you might have thought. Follow these steps:

  • Copy the public key of your colleague to clipboard ( Ctrl + C )
  • SSH into the server
  • If you want to use the same username for all developers (say `deployer`)
    • $ cd /home/deployer/.ssh
      $ sudo nano authorized_keys
    • Paste the SSH public key of your colleague at the end of the file
    • Save the file
    • And you are Done!
    • Now your colleague should have access to the server
  • If you want a separate username for every individual
    • It would be great if you create a separate user group for deployment purposes, like `deployers`
    • create a new user in that group with the privileges you like
      • to give sudo access you need to update the sudoers file
      • The configuration file for sudo is /etc/sudoers
    • go to that particular user’s home directory
    • add his SSH public key to the /home/user/.ssh/authorized_keys file
      • the .ssh folder might not exist yet; you can create it, though
      • $ mkdir /home/user/.ssh
      • copy the key content into the file using `nano` or `echo`
    • Now you are done! (see the sketch after this list)
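For the second approach, here is a minimal sketch assuming a hypothetical user `newdev` in a `deployers` group and a placeholder public key (Debian/Ubuntu-style commands); adjust the names and the key to your setup:

# create the group and a user inside it
$ sudo addgroup deployers
$ sudo adduser --ingroup deployers newdev

# create the .ssh directory and append the colleague's public key
$ sudo mkdir -p /home/newdev/.ssh
$ echo 'ssh-rsa AAAA... newdev@laptop' | sudo tee -a /home/newdev/.ssh/authorized_keys

# SSH is strict about ownership and permissions
$ sudo chown -R newdev:newdev /home/newdev/.ssh
$ sudo chmod 700 /home/newdev/.ssh
$ sudo chmod 600 /home/newdev/.ssh/authorized_keys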

 

SSL Certificates on Linux / Nginx

Step One — Create the SSL Certificate

We can start off by creating a directory that will be used to hold all of our SSL information. We should create this under the Nginx configuration directory:

sudo mkdir /etc/nginx/ssl

Now that we have a location to place our files, we can create the SSL key and certificate files in one motion by typing:

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt

You will be asked a series of questions. Before we go over that, let’s take a look at what is happening in the command we are issuing:

  • openssl: This is the basic command line tool for creating and managing OpenSSL certificates, keys, and other files.
  • req: This subcommand specifies that we want to use X.509 certificate signing request (CSR) management. “X.509” is a public key infrastructure standard that SSL and TLS adhere to for their key and certificate management. We want to create a new X.509 cert, so we are using this subcommand.
  • -x509: This further modifies the previous subcommand by telling the utility that we want to make a self-signed certificate instead of generating a certificate signing request, as would normally happen.
  • -nodes: This tells OpenSSL to skip the option to secure our certificate with a passphrase. We need Nginx to be able to read the file, without user intervention, when the server starts up. A passphrase would prevent this from happening because we would have to enter it after every restart.
  • -days 365: This option sets the length of time that the certificate will be considered valid. We set it for one year here.
  • -newkey rsa:2048: This specifies that we want to generate a new certificate and a new key at the same time. We did not create the key that is required to sign the certificate in a previous step, so we need to create it along with the certificate. The rsa:2048 portion tells it to make an RSA key that is 2048 bits long.
  • -keyout: This line tells OpenSSL where to place the generated private key file that we are creating.
  • -out: This tells OpenSSL where to place the certificate that we are creating.

As we stated above, these options will create both a key file and a certificate. We will be asked a few questions about our server in order to embed the information correctly in the certificate.

Fill out the prompts appropriately. The most important line is the one that requests the Common Name (e.g. server FQDN or YOUR name). You need to enter the domain name that you want to be associated with your server. You can enter the public IP address instead if you do not have a domain name.

The entirety of the prompts will look something like this:

Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:New York
Locality Name (eg, city) []:New York City
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Bouncy Castles, Inc.
Organizational Unit Name (eg, section) []:Ministry of Water Slides
Common Name (e.g. server FQDN or YOUR name) []:your_domain.com
Email Address []:admin@your_domain.com

Both of the files you created will be placed in the /etc/nginx/ssl directory.

Wildcard certificate

In computer networking, a wildcard certificate is a public key certificate which can be used with multiple subdomains of a domain. The principal use is for securing web sites with HTTPS, but there are also applications in many other fields.

Common Name (e.g. server FQDN or YOUR name) []:*.your_domain.com
# This makes the certificate also applicable to sub-domains.
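If you would rather skip the interactive prompts altogether, openssl also accepts the subject on the command line via -subj; here is a sketch using the same placeholder values as the prompts above:

# non-interactive variant; the -subj fields mirror the prompts shown earlier
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt \
  -subj "/C=US/ST=New York/L=New York City/O=Bouncy Castles, Inc./CN=*.your_domain.com"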

Step Two — Configure Nginx to Use SSL

We have created our key and certificate files under the Nginx configuration directory. Now we just need to modify our Nginx configuration to take advantage of these by adjusting our server block files. You can learn more about Nginx server blocks in this article.

Nginx versions 0.7.14 and above (Ubuntu 14.04 ships with version 1.4.6) can enable SSL within the same server block as regular HTTP traffic. This allows us to configure access to the same site in a much more succinct manner.

Your server block may look something like this:

server {
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;

        root /usr/share/nginx/html;
        index index.html index.htm;

        server_name your_domain.com;

        location / {
                try_files $uri $uri/ =404;
        }
}

The only thing we would need to do to get SSL working on this same server block, while still allowing regular HTTP connections, is to add these lines:

server {
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;

        listen 443 ssl;

        root /usr/share/nginx/html;
        index index.html index.htm;

        server_name your_domain.com;
        ssl_certificate /etc/nginx/ssl/nginx.crt;
        ssl_certificate_key /etc/nginx/ssl/nginx.key;

        location / {
                try_files $uri $uri/ =404;
        }
}

When you are finished, save and close the file.
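Before restarting, it is worth validating the configuration; nginx has a built-in syntax check:

# test the configuration files for syntax errors before reloading
sudo nginx -t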

Now, all you have to do is restart Nginx to use your new settings:

sudo service nginx restart

This should reload your site configuration, now allowing it to respond to both HTTP and HTTPS (SSL) requests.

Step Three — Test your Setup

Your site should now have SSL functionality, but we should test it to make sure.

First, let’s test to make sure we can still access the site using normal HTTP. In your web browser, go to your server’s domain name or IP address:

http://server_domain_or_IP
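You can also check the HTTPS side from the command line. Since the certificate is self-signed, curl needs the -k flag to skip verification (server_domain_or_IP is the same placeholder as above):

# -k (insecure) is required here because the certificate is self-signed
curl -vk https://server_domain_or_IP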

 

Sources

https://www.digitalocean.com/community/tutorials/how-to-create-an-ssl-certificate-on-nginx-for-ubuntu-14-04

https://en.wikipedia.org/wiki/Wildcard_certificate

https://support.comodo.com/index.php?/Knowledgebase/Article/View/1/38/csr-generation-using-openssl-apache-wmod_ssl-nginx-os-x

https://sg.godaddy.com/help/what-is-a-wildcard-ssl-certificate-567

Some useful monit commands and configs

Config for Sidekiq

# Monit configuration for Sidekiq : myAPP

check process sidekiq_thepact_qa0
  with pidfile "/home/deployer/www/qa/shared/tmp/pids/sidekiq.pid"
  start program = "/bin/su - deployer -c 'cd /home/deployer/www/qa/current && /usr/local/rvm/bin/rvm default do bundle exec sidekiq --config /home/deployer/www/qa/current/config/sidekiq.yml --index 0 -e qa -d'" with timeout 30 seconds
  stop program = "/bin/su - deployer -c 'cd /home/deployer/www/qa/current && /usr/local/rvm/bin/rvm default do bundle exec sidekiqctl stop /home/deployer/www/qa/shared/tmp/pids/sidekiq.pid'" with timeout 110 seconds
  group myapp-sidekiq-qa

Commands

reload configuration

$ sudo monit reload
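It is also worth validating the control file for syntax errors before a reload; monit has a built-in test flag for that:

# syntax-check /etc/monit/monitrc and any included configuration
$ sudo monit -t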

start monit

$ sudo service monit start
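A few other commands that come in handy (the check name and group below are the ones defined in the config above):

# show the state of everything monit is watching
$ sudo monit summary
$ sudo monit status

# act on a single check by its name
$ sudo monit restart sidekiq_thepact_qa0

# act on every check in a group
$ sudo monit -g myapp-sidekiq-qa restart all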

 

Cron jobs in Rails : Whenever gem or Scheduler in Heroku

To run auto-triggered (scheduled) background processes in Ruby on Rails, we normally use a gem like ‘Whenever‘. It’s very easy to use.

A single command like `whenever -i` will update your crontab. To see your current cron status, you can simply use a command like `whenever -l` or `crontab -l`.
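A minimal sketch of the usual workflow, assuming your schedule lives in Whenever's default config/schedule.rb:

# preview the cron entries Whenever generates from config/schedule.rb
$ bundle exec whenever

# write those entries to your crontab
$ bundle exec whenever -i

# check what actually got installed
$ crontab -l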

Cron In Heroku
