19 May 2018


Alvaro Castillo: Mastering en Bash - Searching for files and directories

In search of Wally

Who wouldn't be interested in finding this much sought-after whale in the middle of such a big ocean? Well, not us, to be honest; we prefer to search for other kinds of things, like files and directories on our system. To do so, we will use the commands find(1), locate(1), whois(1) and whereis(1).

Differences

Unlike locate(1), find(1) is a command that searches in real time and has a great many additional features, such as filtering by name, file type, date and so on.
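As a quick sketch of that difference (the paths below are just examples), find(1) walks the filesystem at the moment you run it, while locate(1) queries an index that is refreshed periodically with updatedb(8):

$ find /var/log -type f -name '*.log' -mtime -1   # files modified in the last day, checked right now
$ sudo updatedb                                   # refresh the locate index
$ locate syslog                                   # fast lookup against the prebuilt index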

19 May 2018 8:11pm GMT

18 May 2018


Fedora Magazine: Get started with Apache Cassandra on Fedora

NoSQL databases are every bit as popular today as more conventional, relational ones. One of the most popular NoSQL systems is Apache Cassandra. It's designed to deal with big data, and can be scaled across large numbers of servers. This makes it resilient and highly available.

This package is relatively new on Fedora, having been introduced in Fedora 26. The following article is a short tutorial to set up Cassandra on Fedora for a development environment. Production deployments should use a different setup to harden the service.

Install and configure Cassandra

The database packages in Fedora's stable repositories include the client tools in the cassandra package. The common library is in the cassandra-java-libs package (required by both client and server). The most important part of the database, the daemon, is available in the cassandra-server package. More supporting packages can be listed by running the following command in a terminal.

dnf list cassandra\*

First, install and start the service:

$ sudo dnf install cassandra cassandra-server
$ sudo systemctl start cassandra

To enable the service to automatically start at boot time, run:

$ sudo systemctl enable cassandra

Finally, test the server initialization using the client:

$ cqlsh
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.11.1 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh> CREATE KEYSPACE k1 WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
cqlsh> USE k1;
cqlsh:k1> CREATE TABLE users (user_name varchar, password varchar, gender varchar, PRIMARY KEY (user_name));
cqlsh:k1> INSERT INTO users (user_name, password, gender) VALUES ('John', 'test123', 'male');
cqlsh:k1> SELECT * from users;

 user_name | gender | password
-----------+--------+----------
      John |   male |  test123

(1 rows)

To configure the server, edit the file /etc/cassandra/cassandra.yaml. For more information about how to change configuration, see the upstream documentation.

Controlling access with users and passwords

By default, authentication is disabled. To enable it, follow these steps:

  1. By default, the authenticator option is set to AllowAllAuthenticator. Change the authenticator option in the cassandra.yaml file to PasswordAuthenticator:
authenticator: PasswordAuthenticator
  2. Restart the service.
$ sudo systemctl restart cassandra
  3. Start cqlsh using the default superuser name and password:
$ cqlsh -u cassandra -p cassandra
  4. Create a new superuser:
cqlsh> CREATE ROLE <new_super_user> WITH PASSWORD = '<some_secure_password>'
    AND SUPERUSER = true
    AND LOGIN = true;
  5. Log in as the newly created superuser:
$ cqlsh -u <new_super_user> -p <some_secure_password>
  6. The superuser cannot be deleted. To neutralize the account, change the password to something long and incomprehensible, and alter the user's status to NOSUPERUSER:
cqlsh> ALTER ROLE cassandra WITH PASSWORD='SomeNonsenseThatNoOneWillThinkOf'
    AND SUPERUSER=false;
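As an extra illustration (not part of the original steps), you could then create a regular, non-superuser role for applications and grant it access to the k1 keyspace created earlier; the role name and password below are placeholders:

cqlsh> CREATE ROLE app_user WITH PASSWORD = '<app_password>' AND LOGIN = true;
cqlsh> GRANT ALL PERMISSIONS ON KEYSPACE k1 TO app_user;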

Enabling remote access to the server

Edit the /etc/cassandra/cassandra.yaml file, and change the following parameters:

listen_address: external_ip
rpc_address: external_ip
seed_provider/seeds: "<external_ip>"

Then restart the service:

$ sudo systemctl restart cassandra
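Assuming the firewall allows port 9042, you should then be able to reach the node from another machine with the cqlsh client; the address is a placeholder, and the -u/-p options are only needed if you enabled authentication as described above:

$ cqlsh <external_ip> 9042 -u cassandra -p cassandra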

Other common configuration

There are quite a few more common configuration parameters. For instance, to set the cluster name, which must be consistent for all nodes in the cluster:

cluster_name: 'Test Cluster'

The data_file_directories option sets the directory where the service will write data. Below is the default that is used if unset. If possible, set this to a disk used only for storing Cassandra data.

data_file_directories:
    - /var/lib/cassandra/data

To set the type of disk used to store data (SSD or spinning):

disk_optimization_strategy: ssd|spinning

Running a Cassandra cluster

One of the main features of Cassandra is the ability to run in a multi-node setup. A cluster setup brings the following benefits: data is replicated across nodes, so the loss of a single node does not bring the service down, and capacity can be scaled out simply by adding nodes.

The following sections describe how to set up a simple two-node cluster.

Clearing existing data

First, if the server is running now or has ever run before, you must delete all the existing data (make a backup first). This is because all nodes must have the same cluster name, and it's better to choose one different from the default Test Cluster name.

Run the following commands on each node:

$ sudo systemctl stop cassandra
$ sudo rm -rf /var/lib/cassandra/data/system/*

If you deploy a large cluster, you can do this via automation using Ansible.
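A minimal Ansible sketch of that cleanup (not from the original article; it assumes an inventory group named cassandra) could look like this:

- hosts: cassandra
  become: true
  tasks:
    - name: Stop the Cassandra service
      service:
        name: cassandra
        state: stopped

    - name: Remove existing system keyspace data
      shell: rm -rf /var/lib/cassandra/data/system/*

Run it with ansible-playbook against the inventory that lists your nodes.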

Configuring the cluster

To set up the cluster, edit the main configuration file /etc/cassandra/cassandra.yaml. Modify these parameters: cluster_name, num_tokens, seed_provider, listen_address, rpc_address, endpoint_snitch and auto_bootstrap.

Configuration files for a two-node cluster follow.

Node 1:

cluster_name: 'My Cluster'
num_tokens: 256
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.0.0.1,10.0.0.2"
listen_address: 10.0.0.1
rpc_address: 10.0.0.1
endpoint_snitch: GossipingPropertyFileSnitch
auto_bootstrap: false

Node 2:

cluster_name: 'My Cluster'
num_tokens: 256
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.0.0.1,10.0.0.2"
listen_address: 10.0.0.2
rpc_address: 10.0.0.2
endpoint_snitch: GossipingPropertyFileSnitch
auto_bootstrap: false

Starting the cluster

The final step is to start each instance of the cluster. Start the seed instances first, then the remaining nodes.

$ sudo systemctl start cassandra

Checking the cluster status

Finally, you can check the cluster status with the nodetool utility:

$ sudo nodetool status

Datacenter: datacenter1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load       Tokens       Owns    Host ID                               Rack
UN  10.0.0.2  147.48 KB  256          ?       f50799ee-8589-4eb8-a0c8-241cd254e424  rack1
UN  10.0.0.1  139.04 KB  256          ?       54b16af1-ad0a-4288-b34e-cacab39caeec  rack1
 
Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
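Once both nodes report UN (up, normal), a quick way to exercise the cluster is to create a keyspace replicated to both nodes; the keyspace name here is arbitrary:

cqlsh> CREATE KEYSPACE k2 WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'datacenter1' : 2 };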

Cassandra in a container

Linux containers are becoming more popular. You can find a Cassandra container image on Docker Hub:

centos/cassandra-3-centos7

It's easy to start a container for this purpose without touching the rest of the system. First, install and run the Docker daemon:

$ sudo dnf install docker
$ sudo systemctl start docker

Next, pull the image:

$ sudo docker pull centos/cassandra-3-centos7

Now prepare a directory for data:

$ sudo mkdir data
$ sudo chown 143:143 data

Finally, start the container with a few arguments. The container stores its data in the prepared directory and creates a user and a database.

$ docker run --name cassandra -d -e CASSANDRA_ADMIN_PASSWORD=secret -p 9042:9042 -v `pwd`/data:/var/opt/rh/sclo-cassandra3/lib/cassandra:Z centos/cassandra-3-centos7

Now you have the service running in a container while storing data into the data directory in the current working directory. If the cqlsh client is not installed on your host system, run the one provided by the image with the following command:

$ docker exec -it cassandra 'bash' -c 'cqlsh '`docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' cassandra`' -u admin -p secret'

Conclusion

The Cassandra maintainers in Fedora seek co-maintainers to help keep the package fresh on Fedora. If you'd like to help, simply send them email.


Photo by Glen Jackson on Unsplash.

18 May 2018 12:26pm GMT

Keiran "Affix" Smith: #FaasFriday – Building a Serverless Microblog in Ruby with OpenFaaS – Part 1

Happy #FaaSFriday! Today we are going to learn how to build our very own serverless microblogging backend using pure Ruby. Alex Ellis of OpenFaaS believes it would be useful not to rely on any gems that require native extensions (MySQL, ActiveRecord, bcrypt, etc.), so that's what I intend to do. This tutorial will use nothing but pure Ruby to keep the containers small.

Before you continue reading, I highly recommend you check out OpenFaaS on Twitter and star the OpenFaaS repo on GitHub.

Disclaimer

This microblogging platform, as is, should not be used in a production environment, as the password hashing is sub-optimal (it's plain SHA2 and not salted), but it could easily be changed to something more suitable like bcrypt.

We make extensive use of environment variables; however, you should use secrets. Read more about that here.

Now that's out of the way, let's get stuck in.

Assumptions

As always, there are some prerequisites in order to follow this tutorial.

What is OpenFaas?

With OpenFaaS you can package anything as a serverless function - from Node.js to Golang to CSharp, even binaries like ffmpeg or ImageMagick.

Architecture

Serverless Microblog Architecture

Our serverless microblogging platform consists of five functions:

Register

Our Register function will accept a JSON string containing a username, password, first name, last name and e-mail address. Once submitted, we will check the database for an existing username and e-mail; if none is found, we will add a new record to the database.

Login

Our Login function will accept a JSON string containing a username and password. We will then return a signed JWT containing the username, first name, last name and e-mail.

Add Post

Our Add Post function will accept a JSON string containing the token (from login) and the body of the post.

Before we add the post to the database, we will call the Validate JWT function to validate the signature of the JWT. We will take the user ID from the token.

List Posts

When we call the List Posts function, we can pass it a JSON string to filter the posts by user and to paginate. By default we will return the last 100 posts; to paginate, we will add an offset parameter to our JSON.

If we don't send a filter, we will return the latest 100 posts in our database.
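As a purely hypothetical sketch of that request shape (this function is built in a later part, so its name and field names are assumptions here), a filtered, paginated call might look like:

$ echo '{"token": "<jwt_from_login>", "username": "affixx", "offset": 100}' | faas-cli invoke faas-ruby-list-posts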

Registering Users

In order to add posts we need to have users, so let's get started by registering our first user.

Run the following SQL Query to create the users table.

CREATE TABLE users
(
 id int NOT NULL AUTO_INCREMENT PRIMARY KEY,
 username varchar(255) NOT NULL,
 password text NOT NULL,
 email varchar(255) NOT NULL,
 first_name text NOT NULL,
 last_name text NOT NULL
);
CREATE UNIQUE INDEX users_id_uindex ON users (id);

Now that we have our table, let's create our first function.

$ faas-cli new faas-ruby-register --lang ruby

This will download the templates and create our first function. Open the Gemfile and add the 'ruby-mysql' gem:

source 'https://rubygems.org'

gem 'ruby-mysql'

That's the only gem we need for this function. ruby-mysql is a pure Ruby implementation of a MySQL connector and suits the needs of our project. We will be using this gem extensively.

Now open up handler.rb and add the following code.

require 'mysql'
require 'json'
require 'digest/sha2'

class Handler
    def run(req)
      @my = Mysql.connect(ENV['mysql_host'], ENV['mysql_user'], ENV['mysql_password'], ENV['mysql_database'])
      json = JSON.parse(req)
      if !user_exists(json["username"], json["email"])
        password = Digest::SHA2::new << json['password']
        stmt = @my.prepare('insert into users (username, password, email, first_name, last_name) values (?, ?, ?, ?, ?)')
        stmt.execute json["username"], password.to_s, json["email"], json["first_name"], json["last_name"]
        return "{'username': #{json["username"]}, 'status': 'created'}"
      else
        return "{'error': 'Username or E-Mail already in use'}"
      end
    end

    def user_exists(username, email)
      @my.query("SELECT username FROM users WHERE username = '#{Mysql.escape_string(username)}' OR email = '#{Mysql.escape_string(email)}'").each do |username|
        return true
      end
      return false
    end
end

And that's our function. Let's have a look at some of it in detail. In the run method I assigned an instance variable, @my, holding our MySQL connection. I then parsed the JSON we passed to the function. I used a user_exists method to determine whether a user already exists in our database; if not, I moved on to create a new user. I hashed the password with SHA2 and used a prepared statement to insert the new user.

Open your faas-ruby-register.yml and make it match the following. Please ensure you use your own image instead of mine if you are making modifications.

provider:
  name: faas
  gateway: http://127.0.0.1:8080

functions:
  faas-ruby-register:
    lang: ruby
    handler: ./faas-ruby-register
    image: affixxx/faas-ruby-register
    environment:
      mysql_host: <HOST>
      mysql_user: <USERNAME>
      mysql_password: <PASSWORD>      
      mysql_database: <DB>

Now let's deploy and test the function!

$ faas-cli build -f faas-ruby-register.yml # Build our function Container
$ faas-cli push -f faas-ruby-register.yml # Push our container to dockerhub
$ faas-cli deploy -f faas-ruby-register.yml # deploy the function
$ echo '{"username": "affixx", "password": "TestPassword", "email":"caontact@keiran.scot", "first_name": "keiran", "last_name":"smith"}' | faas-cli invoke faas-ruby-register
{'username': username, 'status': 'created'}
$ echo '{"username": "affixx", "password": "TestPassword", "email":"caontact@keiran.scot", "first_name": "keiran", "last_name":"smith"}' | faas-cli invoke faas-ruby-register
{'error': 'Username or E-Mail already in use'}

Awesome, register works. Let's move on.

Logging In

Now that we have a user in our database, we need to log them in. This will require generating a JWT, so we need an RSA keypair. On a Unix-based system, run the following commands.

$ openssl genpkey -algorithm RSA -out private_key.pem -pkeyopt rsa_keygen_bits:2048
$ openssl rsa -pubout -in private_key.pem -out public_key.pem

Now that we have our keypair, we need a base64 representation of each key; use a method of your choice to generate it.
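For example, on Linux with GNU coreutils (other systems may differ), something like this prints each key as a single unwrapped base64 line:

$ base64 -w 0 private_key.pem
$ base64 -w 0 public_key.pem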

Let's generate our new function with faas-cli:

$ faas-cli new faas-ruby-login --lang ruby

Before we start working on the function, let's get the faas-ruby-login.yml file ready:

provider:
  name: faas
  gateway: http://127.0.0.1:8080

functions:
  faas-ruby-login:
    lang: ruby
    handler: ./faas-ruby-login
    image: affixxx/faas-ruby-login
    environment:
      public_key: <BASE64_RSA_PUBLIC_KEY>
      private_key: <BASE64_RSA_PRIVATE_KEY>
      mysql_host: mysql
      mysql_user: root
      mysql_password:
      mysql_database: users

Now we can write the function. This one is a little more complex than registration, so open the faas-ruby-login/handler.rb file and replace its contents with the following.

require 'mysql'
require 'json'
require 'base64'
require 'jwt'
require 'digest/sha2'

class Handler
    def run(req)
      my = Mysql.connect(ENV['mysql_host'], ENV['mysql_user'], ENV['mysql_password'], ENV['mysql_database'])
      token = nil
      req = JSON.parse(req)
      username = Mysql.escape_string(req['username'])
      my.query("SELECT email, password, username, first_name, last_name FROM users WHERE username = '#{username}'").each do |email, password, username, first_name, last_name|
        digest = Digest::SHA2.new << req['password']
        if digest.to_s == password
          user = {
            email: email,
            first_name: first_name,
            last_name: last_name,
            username: username
          }

          token = generate_jwt(user)
          return "{'username': '#{username}', 'token': '#{token}'}"
        else
          return "{'error': 'Invalid username/password'}"
        end
      end
      return "{'error': 'Invalid username/password'}"
    end

    def generate_jwt(user)
      payload = {
        nbf: Time.now.to_i - 10,
        iat: Time.now.to_i - 10,
        exp: Time.now.to_i + ((60) * 60) * 4,
        user: user
      }

      priv_key = OpenSSL::PKey::RSA.new(Base64.decode64(ENV['private_key']))

      JWT.encode(payload, priv_key, 'RS256')
    end
end

The biggest difference between this function and our register function is of course the JWT generator. JWTs (JSON Web Tokens) are an open, industry-standard method (RFC 7519) for representing claims securely between two parties.

Our payload obviously contains the user hash we fetched from the database; however, there are some other required fields:

nbf: Not Before. Our token is not valid before this timestamp. We subtract 10 seconds from the timestamp to account for clock drift.
iat: Issued At. This is the time we issued the token. Again, we set this 10 seconds in the past to account for clock drift.
exp: Expiry. This is when our token stops being valid; we set it to 14,400 seconds (4 hours) in the future.
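As a side note, here is a minimal sketch of how the public_key from the YAML could be used to verify such a token; the actual Validate JWT function comes in a later part, and the token variable below is just the string returned by the login function:

require 'jwt'
require 'base64'
require 'openssl'

# Verify the RS256 signature with the RSA public key; JWT.decode also
# checks the exp/nbf claims and raises if the token is expired or invalid.
pub_key = OpenSSL::PKey::RSA.new(Base64.decode64(ENV['public_key']))
payload, _header = JWT.decode(token, pub_key, true, { algorithm: 'RS256' })
puts payload['user']['username']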

Let's test our login function!

$ echo '{"username": "affixx", "password": "TestPassword"}' | faas invoke faas-ruby-login
{'username': 'affixx', 'token': 'eyJhbGciOiJSUzI1NiJ9.eyJuYmYiOjE1MjY1OTMyMTgsImlhdCI6MTUyNjU5MzIxOCwiZXhwIjoxNTI2NjA3NjI4LCJ1c2VyIjp7ImVtYWlsIjoiY2FvbnRhY3RAa2VpcmFuLnNjb3QiLCJmaXJzdF9uYW1lIjoia2VpcmFuIiwibGFzdF9uYW1lIjoic21pdGgiLCJ1c2VybmFtZSI6ImFmZml4eCJ9fQ.qchkmOk8dsrw7SL6Rhi0nHyIlaHX4pzUNXXAQMEOb6IU0n1uT9AJEhFVptZ7tueriaTauY1zmYjKm79pd_UfekVICU4EMbGKt8bQaWrmlqpSel88PyQwolI_bYZqybW2TwWYsdwHcGgGgfb8A8ssk9y6YhktviKdofQYPUmLmaB5uljFHkMvNIg-ByJQpTYmCnMfAC-JF6mOsh65dKCP3qz78HiSX3gHODG1Gk1OJbePVpyDNmw7pGrO97c7kUgTWs5wVmD7Kgs697tAkPz65pFDavwZHSvdzpPEZ47Bh8NCGfWe73KYpceCjmOZK6tuawIx0MM4YP0XWke7kOtKkg'}

Success! Let's check the token on JWT.io.

Success, we have a valid JWT.

What happens with an invalid username/password?

$ echo '{"username": "affixx", "password": "WrongPassword"}' | faas invoke faas-ruby-login
{'error': 'Invalid username/password'}

Exactly as expected!

That's all, folks!

Thanks for reading part 1 of this tutorial. Check back next #FaaSFriday, when we will work on posting!

As always, the code for this tutorial is available on GitHub; there are also some extra tools available!

The post #FaasFriday - Building a Serverless Microblog in Ruby with OpenFaaS - Part 1 appeared first on Keiran.SCOT.

18 May 2018 8:00am GMT