Monitoring achieved by Dynamic Alert Policies

In the world of microservices, new services get deployed every day; it is hard to keep track of them all, and hard for a centralized Ops team to add alerts for each and every service. We could depend on service owners to add alerts for their own services, but it is never a good idea to depend on a human for something that can easily be automated. We use NewRelic for monitoring these services, and NewRelic partially solves this with Dynamic Targeting.

Dynamic Targeting

Instead of adding policies for each app, we can use NewRelic tags to create conditions that trigger alerts.

[Screenshot: a dynamic targeting alert condition in the NewRelic UI]

Creating conditions tied to labels such as Environment, Sub-Environment, Site, Public-Facing, etc. lets us attach conditions to new services dynamically: any service carrying the matching labels gets these conditions attached automatically by NewRelic, which is exactly what we want. The labels can be attached to a service through its NewRelic configuration YAML file.
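For instance, in the Java agent's newrelic.yml the labels go in a single semicolon-separated entry. A minimal sketch (the app name and the label names/values below are made-up examples; use your own taxonomy):

```yaml
common: &default_settings
  app_name: orders-service
  # Key:value pairs separated by semicolons; NewRelic turns these into
  # labels that dynamic targeting conditions can match on.
  labels: Environment:production;Sub-Environment:us-east;Site:public-facing
```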

Dynamic Alert Policies

We found NewRelic's Dynamic Targeting pretty good, but it didn't meet all our requirements. Alerts on error rate, Apdex score, and web response time are all service/application specific; they are hard to pin to one global value. So we went ahead and used the NewRelic API to meet our requirements. We built our own orchestrator that deploys these services, and combined NewRelic events, the NewRelic API, SQS, and our own service that adds these alert conditions dynamically on each deployment.

We built an in-house orchestrator for all deployment purposes. It could have added alerts post-deployment itself, but we wanted to do things the microservice way, so we built a dedicated NewRelic microservice to do this work and let the orchestrator concentrate on its main purpose. This NewRelic service listens on SQS for deployment events and adds alert policies based on the new service's metadata from git, which we use to store all application metadata: AWS app or data-center app, ports, UI/API, Apdex score, expected response time, and so on.

Because the metadata lives in GitHub, we can track changes, and any change in GitHub can trigger an event that makes the NewRelic service update the alert values from the updated metadata. The same service is also used to scan and clean up non-reporting AWS servers and APMs, which show up as hidden in NewRelic, among other chores. We built this service with RxJava. This gave us fully automated alert policies without any manual intervention. We also send email notifications whenever we find a service that is not attached to any policy; there is no direct API to determine this, so we used our new scanner service, which I will discuss in a coming post.
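The deployment-event handling can be sketched as follows. This is an illustrative Node.js sketch, not our actual implementation (which is Java/RxJava); the metadata fields and the payload shape are simplified assumptions modeled loosely on NewRelic's v2 alert-conditions API.

```javascript
// Illustrative sketch: turn deployment metadata (as the service would read
// it from git) into a NewRelic alert-condition payload. The field names and
// metadata schema here are invented for illustration.
function buildApdexCondition(meta) {
  return {
    condition: {
      type: 'apm_app_metric',
      name: meta.appName + ' apdex',
      enabled: true,
      entities: [meta.newRelicAppId],
      metric: 'apdex',
      terms: [{
        duration: '5',
        operator: 'below',
        priority: 'critical',
        threshold: String(meta.apdexThreshold), // per-service value from git
        time_function: 'all'
      }]
    }
  };
}

// A deployment event as it might arrive from SQS (values invented)
const payload = buildApdexCondition({
  appName: 'orders-service',
  newRelicAppId: 12345678,
  apdexThreshold: 0.7
});
// The service would then POST this payload to the NewRelic REST API
// against the policy the service belongs to.
```

The point is that the per-service thresholds come from versioned metadata, so redeploying or editing the metadata in git is all it takes to keep alerts current.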

Spring Boot, Executable Jar, init.d, profiles

Undeniably, Spring's profiles are pretty useful. While they can be used for many different things, we use them for configuring environment-specific variables. We use a YAML file for the properties; let's say our application.yml is as below:

url: http://localhost
---
spring:
  profiles: dev
url: http://some.server.com

When running locally, we simply run

mvn spring-boot:run -Drun.jvmArguments="-Dspring.profiles.active=dev"

or

./application.jar --spring.profiles.active=dev

And the dev profile is active locally, meaning the url property, when accessed in the Spring application, will have the value http://some.server.com and not http://localhost.

But how do we tell Spring which profile to use when we run the application as an init.d service using Spring Boot's cool new fully executable jar feature?

While using the init.d script, the server would be started as below (if you have linked the jar properly):

/etc/init.d/application start

But sadly, no arguments that could specify the profile are passed in this case.

That's when the RUN_ARGS and JAVA_OPTS environment variables come into the picture; both are read by the Spring Boot launch script before it runs the jar. For profiles, we should use JAVA_OPTS, as below (RUN_ARGS is for application-specific arguments that you might want to read in the codebase):

JAVA_OPTS="-Dspring.profiles.active=dev" /etc/init.d/application start
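Alternatively, the launch script embedded in a fully executable jar also sources a .conf file sitting next to the jar with the same base name, which is a tidier place for these variables. A sketch, assuming the jar is named application.jar:

```shell
# application.conf, placed next to application.jar; the embedded launch
# script sources this file before starting the JVM.
JAVA_OPTS="-Dspring.profiles.active=dev"
```

This way the profile survives reboots and nobody has to remember to export anything before starting the service.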

More details,

https://docs.spring.io/spring-boot/docs/current/reference/html/deployment-install.html


Spring Boot’s ‘fully executable’ jars

While I was pondering how to run my new Spring Boot application as a service, Spring Boot answered with a 'fully executable' jar. That means no more supervisor or custom scripts, and no more java -jar myApp.jar (assuming the jar name is myApp.jar). Running it is just (from its location):

$ ./myApp.jar

This is how the build section of my POM looks:


<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <version>1.3.0.M5</version>
      <configuration>
        <executable>true</executable>
      </configuration>
      <executions>
        <execution>
          <goals>
            <goal>repackage</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

More details here

http://docs.spring.io/spring-boot/docs/1.3.x-SNAPSHOT/reference/html/deployment-install.html

NGINX 403 Forbidden, Amazon EC2

Recently I decided to host my own website; here are the issues I faced and how I resolved them.

1. An Amazon EC2 instance was created and NGINX was installed, but nothing comes up when the public IP address is hit.

FIX:

Security groups > View Rules

Check whether inbound TCP connections are allowed. A very open policy, allowing all ports and all IP addresses, would look like:

0-65535 tcp 0.0.0.0/0

2. The NGINX default web page shows up, but not the actual HTML page; the error is 403 Forbidden.

FIX:

NGINX needs read permission on the files it serves, and read plus execute permission on the directories that contain them (the execute bit on a directory allows traversal).

Do either of the below

a. Modify the permissions with a command like:

chmod -R 755 <parent_folder>

b. Or make NGINX run as a user that already has sufficient permissions:

  • vi /usr/local/nginx/conf/nginx.conf
  • Modify the first line to user root; (note that running workers as root is generally discouraged, so prefer fixing the permissions)

Multiplayer HTML5 Snake Game

Wanted to try out WebRTC and create a multiplayer snake game. I worked on it for a day and created a simple game, available here: https://github.com/iamharish/snakeM/

For now, multiplayer is achieved locally, with the players using the arrow keys and the WASD keys to control the two snakes. Eventually I would like to use WebRTC so that each player can play from a different machine.

SQL Query Builder for Node – Knex.js and Transactions using Knex

Of late, I have been falling in love day by day with Node.js, and I had a chance to look at Knex, a quick and easy-to-use query builder for Node. You might want to look at Knex as well if you are using Node. It's outright simple and easy to use; the inserts and transactions especially are well implemented.

Doing a single-row insert or a multi-row insert is the same:

knex('books').insert({title: 'Slaughterhouse Five'})
knex('books').insert([{title: 'Slaughterhouse Five'},{title: 'Five point someone'}])

You can extend functionality by chaining, as below:

knex('books')
  .where('published_date', '<', 2000)
  .update({
    status: 'archived'
  })
knex('accounts')
  .where('activated', false)
  .del()

OK, I know all these examples are on the Knex website. But the example below, implementing a transaction as an atomic operation, is slightly different from the ones shown on the Knex site. Basically, there are two ways of achieving transactions using Knex:

1. Transactions using only knex.transaction

2. Transactions using transacting

Below is a combination of both, which is particularly useful when a delete is part of the script. In this example we update the description of one book, delete a book with a particular id, and then insert a new book entry, all in a single transaction.

knex.transaction(function(trx) {
    // description, id, idToDelete and newBook are assumed to be
    // defined earlier in the surrounding code
    return trx('books')
        .where({id: id})
        .update({description: description})
        .then(function() {
            // second style: a plain knex query joined to the
            // transaction via .transacting(trx)
            return knex('books').transacting(trx).where({id: idToDelete}).del();
        })
        .then(function() {
            return trx.insert(newBook).into('books');
        });
})
.then(function() {
    // all three statements committed
})
.catch(function(error) {
    // any failure rolls all three back
});

Identical machine images for multiple platforms using Packer

Last time, when I wrote an article about Vagrant, I used Vagrant boxes that were available online. Recently, while automating some things at the office, I needed to create my own Vagrant box, so I searched around and read a few articles by Fabio Rapposelli. I was very excited reading his articles on how to build a box, so I tried creating my own to see how Packer actually works. Packer is very useful for creating machine images for multiple platforms such as VirtualBox, VMware, Amazon EC2, OpenStack, etc. My requirement was to create a Vagrant box for both VirtualBox and VMware Fusion without learning much about their APIs, and Packer solved that problem pretty easily: small changes in the JSON file that Packer takes as input let me create Vagrant boxes for both. The command is pretty simple:

packer build mybox.json

The Packer documentation is very simple; you can find it at http://www.packer.io/docs

Another advantage of Packer is that once the VM is up and running it is easy to hook in provisioners such as Puppet, Chef, or Ansible, and post-processors such as the Vagrant packager. You can read about them in the documentation link above as well.
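For reference, the overall shape of such a template looks roughly like this. The builder types, provisioner type, and the "vagrant" post-processor are real Packer names, but the ISO URL, checksum, and script name are placeholders I have made up:

```json
{
  "builders": [
    {
      "type": "virtualbox-iso",
      "iso_url": "http://example.com/my-os.iso",
      "iso_checksum_type": "sha256",
      "iso_checksum": "0000000000000000000000000000000000000000000000000000000000000000"
    },
    {
      "type": "vmware-iso",
      "iso_url": "http://example.com/my-os.iso",
      "iso_checksum_type": "sha256",
      "iso_checksum": "0000000000000000000000000000000000000000000000000000000000000000"
    }
  ],
  "provisioners": [
    { "type": "shell", "script": "setup.sh" }
  ],
  "post-processors": ["vagrant"]
}
```

Targeting another platform is then just a matter of which builders are listed; a single packer build run produces artifacts for all of them.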

I am attaching the links to the JSON and script files I wrote to do this:

https://github.com/hemanthgk10/Packer

Thanks to Fabio Rapposelli for all the help and for clearing my doubts. Hopefully I will write my next blog on how to package that into the .box format used by vCloud Director, and hopefully I can also package VC and ESX boxes to use with Vagrant.


Which web framework is most wanted in the job market?

Answer by Harish Gokavarapu:

Hello there, thanks for asking me. Assuming the question is about web application frameworks, and trying to be as realistic as possible about this very relative question, below is my answer (of course with some metadata from the neurons of my brain's right hippocampus 🙂
I broadly categorize tech companies into two worlds:
Technology-Enabled Companies – where technology is only one of the enablers
Technology-Driven companies – where technology is the Driver

And job needs and recruitment are completely different in both the worlds.

Spring MVC and ASP.NET are still the moguls in the tech-enabled companies, while Ruby on Rails and Struts+J2EE are also big players. Sadly, most hiring in this world is choosy about the exact frameworks one knows.

For tech-driven companies the job market is quite abuzz, and the best part is that new frameworks keep coming in and novice players keep replacing the established guys. For now, though, I hear about these a lot:
Laravel for PHP; Django and Zope 2 for Python; Play and Lift for Scala; Play, Grails, and Vert.x (specifically when the application is event-driven) for Java; Express.js for Node.js; Rails for Ruby. And as many would say, it's the language rather than the framework that people are on the lookout for. If you know a language and one or more of its frameworks, that's good enough and you are in for the interview in this world. It takes a lot more to sail through the interview process, though.

Some helpful links that show trends and benchmarks of various frameworks
Framework technologies Web Usage Statistics
TechEmpower Web Framework Performance Comparison
The 2014 Decision Maker's Guide to Java Web Frameworks


What is the meaning of a ‘valid Javascript origin’?

Answer by Harish Gokavarapu:

Not sure if you are asking about valid CORS origin header. If so…
When making an XMLHttpRequest with the client and the server hosted on different domains, the request will be honored only when the origin of the request, i.e. the client host, is allowed on the server end, either explicitly or through credentials. You can find more details in the link below:
Using CORS – HTML5 Rocks
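As a toy sketch of "allowed explicitly on the server end": the server compares the request's Origin header against a whitelist and, only on a match, echoes it back in the Access-Control-Allow-Origin response header. The whitelist and helper below are invented for illustration:

```javascript
// Decide whether a cross-origin request should be honored, and which
// Access-Control-Allow-Origin value to send back if so.
const allowedOrigins = ['https://app.example.com', 'http://localhost:3000'];

function corsHeaderFor(requestOrigin) {
  // Return the header value for a valid origin, or null to refuse.
  return allowedOrigins.indexOf(requestOrigin) !== -1 ? requestOrigin : null;
}

console.log(corsHeaderFor('https://app.example.com')); // echoed back
console.log(corsHeaderFor('https://evil.example'));    // null: not a valid origin
```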


How do you compare Play (Java) vs Spring in terms of (Startup/Easier to learn /is the Future/Job/better workflow/Features)?

Answer by Harish Gokavarapu:

Spring or Play – To answer this question we need to first look into why these frameworks came into existence.

Spring – the whole idea behind the Spring framework was to keep code simple and free of cross-cutting concerns, be it by shedding the EJB paradigm, Inversion of Control, Aspect-Oriented Programming, or even the convention-over-configuration philosophy.

Play Framework – the core motivation of the Play Framework is optimization, be it of developer productivity or of resources (by being asynchronous). Keeping it simple was more an outcome of that (in fact it's even simpler than the Spring Framework). How it does this is largely inspired by popular frameworks in languages like Scala, Python, and Ruby.

The similarities between Spring and Play end at the convention-over-configuration philosophy. While Spring is a more mature framework, with proven capabilities of handling huge loads, and can fit into any layer of an enterprise application, the Play Framework isn't like that. It is a reactive web framework that has imbibed the asynchronous approach to its core, and if a developer doesn't embrace that, screwing up big is quite easy; the whole mindset of approaching a problem has to be different. While the prospects look promising, the adoption of Play hasn't spread like wildfire, maybe due to the ever-emerging asynchronous frameworks in Java, as well as in other unthought-of server languages like JavaScript (yes, Node.js).

While asynchronous reactive computing will eventually be ubiquitous, whether it's Play or something else that takes the lead is still an open question.

I guess I have answered all the questions within the wide paras above, but let me place everything in one spot and be more specific.

1. Startup/Easier to Learn – Play rules; it's just so easy to get started with Play. Somehow I'm not a big fan of XML configuration, and thankfully Play has none. Hit 'play run' and there you go: the app is up and running. A minor learning curve is needed for the Scala-based templating engine, though. But it's quite straightforward.

2. Future – Spring has been and will remain an enterprise framework, while Play is for enthusiasts and enterprises alike. The early adopters of Play have been startups, and that will remain so until some startup using Play manages to handle a hell of a lot of load, or a large enterprise takes it up and showcases it. The future belongs to asynchronous reactive applications, and Play can give developers a head start on that ideology.

3. Jobs – Spring has been in use for quite some time, and knowing Spring itself could easily fetch a job. Not so for Play: while Play itself wouldn't fetch a job the same way, the approach to problem solving that Play makes you learn could.

4. Better workflow/Features – This one is the hardest of all and very relative. Spring is huge; comparing its every feature with a relatively novice Play would not be appropriate. Take it this way: Play has all that you generally need for building a web application, and all of it is very neatly integrated, sometimes even more neatly than in Spring, be it the embedded testing framework that uses JUnit and Selenium, loggers, parsers, a scheduler, a very easy-to-use web services framework, an SMTP mailer, or the actor-model implementation using promises, and much more. But if you have specific or more powerful needs, you might have to bring in your own libraries.
