
Saturday, 16 September 2017

EC2 SSH Connection Refused

When you see “ssh: connect to host ip_address port 22: Connection refused”




Unable to access server???
You see the error “ssh: connect to host ip_address port 22: Connection refused” while connecting to your AWS EC2 instance. To find a solution you go to the AWS forums and other channels, where you first need to answer several questions, and it is still very difficult to pin down the actual problem.
To get clues about what the problem is, we should provide as many details as possible about what we have tried and the results we are getting, because there are hundreds of reasons why a server or service might not be accessible. Connectivity is also one of the toughest issues to diagnose, especially when you are hosting something critical on your box.
I've seen several topics on this problem, but none offered a solution to it. I was not sure what to look at first, so I walked through it from the very basics and investigated the following things.
Use verbose mode while connecting over ssh
    $ ssh -vvv user@x.x.x.x
This didn’t help me, as I didn’t find any meaningful information beyond “connection refused”.
  • After that I looked at my security groups, but they didn’t provide me any hint for further steps (a quick CLI check is sketched after this list).
  • Then I tried telnet on port 22 from my public and private network, which was again hard luck for me.
    $ telnet X.X.X.X 22
  • Tried creating an AMI and building a new instance from it.
  • I mounted the EBS volume of the broken instance on a running instance and looked at the ssh daemon configuration (the rescue steps are sketched after this list):
           $ cat /etc/ssh/sshd_config
          and compared that with the running instance.
  • Also checked the entries in /etc/fstab, but the entries were all perfect as per my knowledge.
  • Tried starting the instance from the broken instance, but again the same error occurred on the screen.
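For the security group check, a quick way to confirm from the CLI that port 22 is actually open is to list the group's inbound rules. This is only a sketch and not part of the original troubleshooting; the group ID below is a hypothetical placeholder, and it assumes the AWS CLI is configured with credentials for the account.
    # Group ID is a placeholder; use the group attached to the instance
    $ aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
        --query 'SecurityGroups[].IpPermissions'
    # Port 22 should appear with a CIDR that covers your source IP, e.g. x.x.x.x/32 or 0.0.0.0/0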
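The rescue mount itself can also be driven from the CLI. A minimal sketch, assuming a healthy helper instance in the same availability zone; all IDs and device names below are hypothetical placeholders, and the root device name varies by AMI.
    # Stop the broken instance and detach its root volume (IDs are placeholders)
    $ aws ec2 stop-instances --instance-ids i-0aaaaaaaaaaaaaaaaa
    $ aws ec2 detach-volume --volume-id vol-0123456789abcdef0

    # Attach the volume to a healthy instance in the same AZ and mount it there
    $ aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0bbbbbbbbbbbbbbbbb --device /dev/sdf

    # On the healthy instance (the device may show up as /dev/xvdf)
    $ sudo mkdir -p /mnt/rescue && sudo mount /dev/xvdf1 /mnt/rescue
    $ cat /mnt/rescue/etc/ssh/sshd_config
    $ cat /mnt/rescue/etc/fstab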
Coming to the AWS UI console :-
  • Moving further through the AWS UI, under Actions I found an option to set user data.

So the below entry was made:
#cloud-config
snappy:
  ssh_enabled: True
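If you prefer the CLI to the console for this step, user data can also be set on a stopped instance with modify-instance-attribute. This is only a sketch rather than the exact procedure I followed: the instance ID and file name are placeholders, and depending on your CLI version the file may need to be base64-encoded, so check aws ec2 modify-instance-attribute help first.
    $ aws ec2 stop-instances --instance-ids i-0123456789abcdef0
    $ aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
        --attribute userData --value file://user-data.txt
    $ aws ec2 start-instances --instance-ids i-0123456789abcdef0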


  • I went through the different options in the UI and then looked at the system logs (they can also be pulled from the CLI, as sketched after this list).

          And found that the issue was with swap, which was throwing an error while being mounted.
  • So I stopped the broken instance, mounted its EBS volume on the running one, and commented out the swap entry in /etc/fstab (see the fstab sketch after this list).
  • Finally I found that my instance was up and running. I looked at the system logs in the AWS UI again, where the login prompt appeared, and I was able to access my instance again.
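The same system log shown in the console can be pulled with the CLI as well, which is handy when comparing boot attempts; the instance ID below is a placeholder.
    $ aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text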
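And the fstab fix on the rescue mount looked roughly like the below; the swap entry shown in the comment is hypothetical, yours will differ.
    # Example of a swap entry that fails at boot:  /dev/xvdb  none  swap  sw  0  0
    # Comment out every swap line in the broken volume's fstab
    $ sudo sed -i '/swap/ s/^/#/' /mnt/rescue/etc/fstab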

Conclusion :-
If you come across any such error, go through the AWS console and system logs of the machine, look for the issue, and get to the core of the problem.

Monday, 27 October 2014

VPC per environment versus Single VPC for all environments


This blog talks about two possible ways of hosting your infrastructure in the cloud. It is closer to hosting on AWS, since it comes from a real-life example, but the problem applies to any cloud infrastructure set-up. I'm just sharing my thoughts and the pros & cons of both approaches, but I would also love to hear the take of the people reading this blog and what they think.


Before jumping right into the real talk I would like to give a bit of background on how I came up with this blog. I was working with a client, managing their cloud infrastructure, where we had 4 environments: dev, QA, pre-production and production, and each environment had close to 20 instances. Apart from application instances there were some admin instances as well, such as Icinga for monitoring, Logstash for consolidating logs, a Graphite server for visualisation, and a VPN server to manage people's access.




At this point we got into a discussion about whether the current infrastructure set-up, with a separate VPC per environment, is the right one, or whether the ideal set-up would have been a single VPC with the environments separated by subnets, i.e. a pair of subnets (public, private) for each environment, as sketched below.
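To make the single-VPC variant concrete, here is a minimal sketch of how the environments could be carved out as subnet pairs inside one VPC; the CIDR ranges and the VPC ID are purely illustrative.
    $ aws ec2 create-vpc --cidr-block 10.0.0.0/16

    # One public/private subnet pair per environment (VPC ID is a placeholder)
    $ aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.1.0/24   # dev public
    $ aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.2.0/24   # dev private
    $ aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.3.0/24   # QA public
    $ aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.4.0/24   # QA private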





Both approaches have some pros & cons associated with them

Single VPC set-up

Pros:

  1. You only have a single VPC to manage
  2. You can consolidate your admin apps, such as the Icinga and VPN servers.

Cons:

  1. As you are separating your environments through subnets, you need granular access control at the subnet level, i.e. instances in the staging environment should not be allowed to talk to instances in the dev environment. Similarly, you have to control people's access at a granular level as well (a network ACL example is sketched after this list).
  2. The scope of human error is high, as all the instances will be in the same VPC.
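As an illustration of the subnet-level control the first point asks for, a network ACL entry can deny traffic from another environment's subnet; the ACL ID and CIDR ranges below are hypothetical.
    # Deny all TCP traffic from the dev subnet (10.0.1.0/24) in the staging subnet's network ACL
    $ aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
        --ingress --rule-number 100 --rule-action deny --protocol tcp \
        --port-range From=0,To=65535 --cidr-block 10.0.1.0/24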

VPC per environment setup

Pros:

  1. You have a clear separation between your environments due to separate VPCs.
  2. You will have finer access control over your environments, as the access rules of a VPC effectively become the access rules of that environment.
  3. As an admin it gives you a clear picture of your environments, and you have the option to clone your complete environment very easily.

Cons:

  1. As the flip side of the consolidation benefit mentioned in the pros of the single VPC set-up, you are at some financial loss, as you would be duplicating the admin applications across environments.


In my opinion the decision of choosing a specific set-up largely depends on the scale of your environment. If you have a small or even medium-sized environment, then you can have your infrastructure set up as "all environments in a single VPC"; in the case of a large set-up, I strongly believe that the VPC-per-environment set-up is the way to go.

Let me know your thoughts, and also the points in favour of or against each of these two approaches.