Day-to-day tasks as a DevOps engineer

Chef
Established a complete Chef environment in my organization.
Installed and configured Chef Workstation and Chef Server, and bootstrapped Chef nodes.
Hands-on experience working with a managed Chef server.
Good knowledge of Knife, chef-client, the Ohai tool, and idempotency concepts.
Wrote many cookbooks from scratch and reused cookbooks from the Chef Supermarket.
Managed cookbook dependencies using a Berksfile.
Practical experience with Ruby scripting, used extensively in recipes to define Chef resources.
Deployed Apache web servers using community cookbooks.
Familiar with concepts such as the metadata file and run lists.
Created configuration files using Chef attributes and deployed them to Chef nodes.
Used wrapper cookbooks to call Chef Supermarket cookbooks instead of downloading them.
Familiar with Chef best practices.
Fully automated chef-client runs instead of triggering them manually each time.
Good knowledge of advanced Chef concepts such as roles; created many roles in the process of achieving complete automation.
Well aware of the advantages of configuration management and of the Chef tool.
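As an illustration of the recipe work described above, a minimal sketch might look like this (package, template, and attribute names are hypothetical):

```ruby
# recipes/default.rb -- install Apache and render a config file from an attribute
package 'httpd'  # idempotent: installs only if the package is missing

template '/etc/httpd/conf.d/site.conf' do
  source 'site.conf.erb'
  variables(port: node['mysite']['port'])  # value supplied by an attribute file
  notifies :restart, 'service[httpd]'      # restart only when the file changes
end

service 'httpd' do
  action [:enable, :start]
end
```

A wrapper cookbook would declare this cookbook in its metadata.rb and call `include_recipe` rather than copying the code.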

Git
Complete understanding of source code management and version control systems, how they work, and their advantages.
Good knowledge of Git and its advantages over other SCM tools.
Installed and configured Git and GitHub within the organization.
Used Git extensively in every project to store all kinds of code.
Good understanding of Git terminology.
Familiar with the Git areas: working directory, staging area/index, local repository, and remote repository.
Complete understanding of snapshots and commits.
Attached tags to commits, since commit IDs are difficult to remember.
Practical knowledge of Git commands such as git pull, git push, git fetch, git clone, and git log.
Familiar with concepts such as ignoring files (.gitignore) and git stash.
Hands-on experience with branching, merging, switching branches, and resolving merge conflicts.
Good understanding of the difference between git reset and git revert, as well as cleaning a Git repository.
Good knowledge of advanced concepts such as git rebase, git bisect, squashing commits, and git cherry-pick.
Hands-on experience working with GitHub and Bitbucket.
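A typical branch-tag-merge session from the workflow above (file names and messages are illustrative):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"   # local identity for this demo repo
git config user.name  "Dev"
main=$(git symbolic-ref --short HEAD)     # default branch name varies (main/master)

echo "v1" > app.txt
git add app.txt
git commit -qm "initial commit"
git tag v1.0                              # a tag is easier to recall than a commit ID

git checkout -qb feature                  # create and switch to a branch
echo "v2" >> app.txt
git commit -qam "feature change"

git checkout -q "$main"
git merge -q feature                      # fast-forwards cleanly; no conflict here
```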

Docker
Practical experience installing and configuring Docker.
Well aware of Docker's advantages over other virtualization technologies on the market.
Good knowledge of Docker concepts such as OS-level virtualization and the layered file system Docker uses.
Good understanding of Docker, its components, and the Docker workflow.
Created many Docker containers from Docker images.
Hands-on experience using images from Docker Hub as well as creating our own images from containers.
Good understanding of how Dockerfiles work; created Docker images from Dockerfiles.
Created a private Docker registry so that only project members can access our images, keeping security tight.
Created Docker volumes so data stays available even if a container goes down.
Hands-on experience sharing volumes among containers and between a container and the host.
Used Docker port mapping to expose ports so that a website running inside a container can be reached from outside.
Created a local registry server using the registry image from Docker Hub.
Good knowledge of pulling and pushing Docker images from and to Docker Hub.
Hands-on experience with all kinds of Dockerfile instructions.
Good knowledge of creating daemonized Docker containers.
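A small Dockerfile in the spirit of the points above (base image, content path, and port are assumptions):

```dockerfile
# Build a minimal web-server image; each instruction adds a read-only layer
FROM nginx:alpine
COPY site/ /usr/share/nginx/html/   # static content baked into the image
EXPOSE 80                           # documents the port to publish, e.g. -p 8080:80
# nginx runs in the foreground, so the container stays up when daemonized with -d
CMD ["nginx", "-g", "daemon off;"]
```

Built with `docker build -t mysite .` and run daemonized with `docker run -d -p 8080:80 mysite`; a `-v` flag would attach the shared volumes mentioned above.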
        2/2014 – 9/2015
Ansible
Hands-on experience installing and configuring Ansible.
Well aware of Ansible's advantages over other configuration management tools.
Good knowledge of the inventory file and of host patterns for adding and targeting hosts.
Used Ansible ad-hoc commands extensively for small tasks instead of writing a playbook for each one.
Good knowledge of the many Ansible modules and when to use each.
Good knowledge of idempotency in Ansible.
Hands-on experience writing YAML for various playbooks.
Familiar with the various playbook sections, such as the target and task sections.
Used the vars section in many playbooks to avoid hard-coding values.
Used handlers to manage dependencies between tasks in playbooks.
Extensively used other Ansible concepts such as loops, to run one task over many items, and conditionals.
Used Ansible Vault to secure sensitive information such as passwords and secret key files.
Hands-on experience with Ansible roles, restructuring the sections of a playbook into a role layout.
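A playbook sketching the sections mentioned above — vars, tasks with a loop, and a handler (host group, packages, and file names are illustrative):

```yaml
---
- hosts: webservers            # host pattern resolved against the inventory file
  become: yes
  vars:
    http_port: 8080            # variable instead of a hard-coded value
  tasks:
    - name: Install web packages
      yum:
        name: "{{ item }}"
        state: present         # idempotent: no change if already installed
      loop:
        - httpd
        - mod_ssl
    - name: Deploy config from template
      template:
        src: httpd.conf.j2
        dest: /etc/httpd/conf/httpd.conf
      notify: restart httpd    # handler fires only if the file actually changed
  handlers:
    - name: restart httpd
      service:
        name: httpd
        state: restarted
```

Secrets referenced by such a playbook would live in a vault file created with `ansible-vault create`.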

Maven
Installed and configured Maven.
Well aware of the advantages of a build tool over manual builds.
Good understanding of Maven's architecture.
In-depth understanding of the Maven build lifecycle and Maven goals.
Hands-on experience with the Maven directory structure and the local repository used to store dependencies.
Knowledge of using the Maven central repository.
Good understanding of Maven's main configuration file, pom.xml.
Handled multi-module projects using Maven.
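The pom.xml referred to above minimally declares project coordinates and dependencies (group and artifact IDs are illustrative):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>        <!-- illustrative coordinates -->
  <artifactId>demo-app</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>
  <dependencies>
    <dependency>  <!-- fetched from the central repo, cached in the local repo -->
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.13.2</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
```

`mvn package` then walks the lifecycle phases (validate, compile, test, package) up to the requested goal.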

Jenkins
Installed and configured Jenkins on both Windows and Linux machines.
Complete understanding of the Jenkins workflow and its advantages over other CI/CD tools.
Hands-on experience integrating tools such as Git, Maven, Selenium, JUnit, and Tomcat with Jenkins.
Installed and configured Java, a prerequisite for installing Jenkins.
Installed and configured build tools such as Maven and Ant and integrated them with Jenkins.
Practical experience creating freestyle, Maven, and dependency projects.
Established a complete Jenkins CI/CD pipeline, covering the full build and delivery workflow.
Installed various plugins from the Jenkins community, since Jenkins is largely plug-in and play.
Configured many scheduled projects so they run regularly without a manual trigger.
Responsible for the complete pipeline, from pulling source code out of the Git repository to deploying the end product onto servers.
Created many linked projects and configured upstream and downstream projects.
Customized the Jenkins home page and created my own views, including nested views.
Created many Jenkins user accounts and granted them limited privileges via Jenkins roles, so security is enforced at every stage.
Created many slave nodes to take load off the Jenkins master.
Hands-on experience deploying the end product to Tomcat and other application servers.
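A declarative Jenkinsfile capturing the Git-to-Tomcat flow described above (repository URL and deploy path are placeholders):

```groovy
// Jenkinsfile: checkout -> Maven build -> deploy, as one CI/CD pipeline
pipeline {
  agent any
  stages {
    stage('Checkout') {
      steps { git url: 'https://github.com/example/demo-app.git' }  // placeholder repo
    }
    stage('Build') {
      steps { sh 'mvn -B clean package' }                // assumes Maven on the agent
    }
    stage('Deploy') {
      steps { sh 'cp target/*.war /opt/tomcat/webapps/' }  // placeholder Tomcat path
    }
  }
  post {
    always { echo 'Pipeline finished' }
  }
}
```

A `triggers { cron('H 2 * * *') }` block would give the scheduled, trigger-free runs mentioned above.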

Nagios
Installed and configured Nagios, one of the oldest and most widely used monitoring tools.
Well aware of the complete workflow, architecture, and advantages of Nagios.
Used many Nagios plugins to monitor different services on different hosts.
Good understanding of the Nagios dashboard and the items being monitored.
Created multiple host groups and service groups.
Good understanding of the Nagios directory structure and capable of managing it.
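Hosts and service checks of the kind described above are defined in Nagios object configuration files; a sketch (host name and address are placeholders):

```
# objects/web01.cfg -- one monitored host plus an HTTP service check
define host {
    use        linux-server        ; inherit the stock host template
    host_name  web01
    address    192.0.2.10          ; placeholder address
}

define service {
    use                  generic-service
    host_name            web01
    service_description  HTTP
    check_command        check_http   ; from the standard Nagios plugins package
}
```

Hostgroup and servicegroup objects then bundle such definitions for the dashboard views mentioned above.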
Cloud watch
Extensively used CloudWatch, AWS's cloud monitoring service.
Monitored metrics such as CPU utilization, memory usage, disk space, and many more.
Created many CloudWatch alarms to be alerted when anything unusual happens, so action can be taken promptly to reduce downtime.
Created different kinds of dashboard widgets, such as line, stacked area, and number widgets.
Worked with the various metrics provided by AWS.
Good knowledge of both default and detailed monitoring.
Integrated CloudWatch with many other AWS services to help maintain high availability.

Kubernetes
Responsible for installing and configuring Kubernetes in physical as well as cloud environments.
Well aware of the advantages, architecture, and complete workflow of Kubernetes.
Good knowledge of every Kubernetes component.
Installed and configured the K8s master and K8s nodes and established communication between them.
Good understanding of the master components that form the control plane: kube-apiserver, kube-scheduler, and the etcd store.
Knowledge of the node components: kube-proxy, kubelet, and the container engine.
Good understanding of single-container as well as multi-container pods.
Well aware of pod limitations and how to address them using higher-level K8s abstractions such as ReplicaSets, Deployments, Volumes, and Services.
Good knowledge of achieving auto-scaling and auto-healing.
Familiar with version upgrades via rolling updates, as well as rollbacks.
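A Deployment plus Service of the kind referred to above (names and image tag are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment               # higher-level abstraction managing a ReplicaSet
metadata:
  name: web
spec:
  replicas: 3                  # auto-healing: failed pods are replaced to keep 3
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
        - name: web
          image: nginx:1.25    # bumping this tag triggers a rolling update
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service                  # stable virtual IP in front of the pods
metadata:
  name: web
spec:
  selector: { app: web }
  ports:
    - port: 80
```

`kubectl rollout undo deployment/web` would perform the rollback mentioned above.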
        
AWS
Hands-on experience with EC2 (Elastic Compute Cloud), ELB (Elastic Load Balancing), Auto Scaling, S3 (Simple Storage Service), CloudFront, IAM (Identity and Access Management), VPC (Virtual Private Cloud), Glacier, Route 53, SNS (Simple Notification Service), CloudFormation, Elastic Beanstalk, EFS (Elastic File System), CloudWatch, and Trusted Advisor. Familiar with SQS (Simple Queue Service), SES (Simple Email Service), RDS, DynamoDB, Redshift, ElastiCache, OpsWorks, the AWS whitepapers, Snowball, the AWS CLI, and Elastic Transcoder.

VPC
Took the lead in migrating servers and data from the on-premises data center to the AWS cloud.
Responsible for complete administration of the cloud infrastructure in my organization.
Created a VPC from scratch and connected it to the network using Internet Gateways, route tables, and NAT.
Created many public and private subnets to properly segregate web servers from database servers for stronger security.
Defined IP ranges in the VPC for better control over it.
Hands-on experience with VPC peering, connecting multiple VPCs so they act as a single entity.
Enabled VPC Flow Logs for auditing, tracking inbound and outbound traffic to and from the VPC.
Launched bastion/jump servers in public subnets to provide SSH access to servers in private subnets.
Launched web servers in public subnets through Auto Scaling and connected them to a load balancer for traffic distribution and high availability.
Launched database servers in private subnets and provided internet access through a NAT server.
Good understanding of NACLs (Network Access Control Lists) and Security Groups for allowing and restricting ports at the subnet level and instance level, respectively.
        
EC2
Good knowledge of EC2, including launching Windows and Linux machines, and of all five EBS (Elastic Block Store) volume types and their differences.
Launched all three kinds of load balancers and attached them to web servers to distribute traffic and to health-check EC2 instances, keeping them always up and running.
Extensively used launch configurations and Auto Scaling to keep EC2 machines highly available, with scaling policies driven by web traffic.
Hands-on experience creating snapshots to back up EBS volumes.
Created AMIs and volumes; attached and detached volumes and built custom AMIs to replicate the same environment across Availability Zones and regions.
Good understanding of both system status checks and instance status checks, and how to troubleshoot when either fails.
Encrypted volumes to protect data from unauthorized access and misuse, and to guard against accidental deletion.

S3
Migrated my organization's object storage into S3 buckets for durability and security.
Enabled versioning on important data to protect against accidental deletion and allow rollback to previous versions.
Good knowledge of Access Control Lists (ACLs) and bucket policies for restricting unauthorized access to our buckets.
Enabled CRR (Cross-Region Replication) to replicate data to buckets in other regions.
Launched static websites for testing using S3 static website hosting, instead of always provisioning EC2, load balancers, and Auto Scaling.
Good knowledge of Transfer Acceleration, which speeds up transfers into S3 buckets via AWS's globally distributed edge locations.
Well aware of the different storage classes/tiers; used lifecycle management policies to transition data between storage classes automatically after a set period.
Good knowledge of CORS for sharing resources across buckets without actually copying data.
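A lifecycle configuration of the kind described above, transitioning objects through storage tiers (prefix and day counts are illustrative):

```json
{
  "Rules": [
    {
      "ID": "archive-logs",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30,  "StorageClass": "STANDARD_IA" },
        { "Days": 365, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 730 }
    }
  ]
}
```

Applied to a bucket with `aws s3api put-bucket-lifecycle-configuration`.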
Cloud Front
Hands-on experience with CloudFront, serving web pages with low latency through AWS's globally distributed edge locations.
Familiar with setting TTLs, which control how long objects are cached at edge locations.

IAM
Hands-on experience managing IAM to administer AWS resources effectively.
Created many user accounts, placed users into appropriate groups, and granted limited privileges to both users and groups for better security.
Well aware of the policies provided by AWS and their usage.
Good knowledge of IAM roles, which give password-less access to AWS resources.
Used roles mainly to establish password-less access between EC2 and S3 for data migration in both directions.
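The kind of policy attached to such a role might look like this (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```

Attached to an EC2 instance profile, this grants the password-less S3 access described above without storing any credentials on the instance.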

Route-53
Good understanding of purchasing domain names from AWS, and deep knowledge of the DNS service.
Created alias record sets to provide alias names for load balancer DNS names.
Configured routing policies to provide high availability at the region level and protect our infrastructure from regional failure.
Hands-on experience with all routing policies (Simple, Weighted, Latency, Failover, and Geolocation) to route traffic as required.
Configured health checks to ensure the failover routing policy works effectively.
Databases
Well aware of how RDS works and of establishing secure connections to web servers.
Configured Multi-AZ deployments to provide high availability with no downtime even if the database server fails.
Configured read replicas to spread read-intensive load and reduce the load on the primary database server.
Took frequent backups (both automated backups and DB snapshots) for standby purposes, replicating the same database and data to other AZs and regions.
Knowledge of DynamoDB and how it works.
Started learning Redshift and ElastiCache, key AWS services for data warehousing and caching engines, respectively.

SNS
Configured SNS notifications at the Auto Scaling and Route 53 levels to be alerted on server and VPC failures, respectively.
Hands-on experience creating SNS topics, adding subscribers, and managing subscriptions.
Integrated SNS with almost every AWS service to receive notifications over different protocols, such as email, email-JSON, and SMS.

Cloud watch
Hands on experience in working with cloud watch to monitor all AWS services to maintain high availability and reduce downtime.
Configured cloud watch alarms to get alert whenever any untoward situation arises which helps in addressing issue immediately.
Aware of both default monitoring and detailed monitoring.
Good knowledge in effective usage of all metrics which are being provided by AWS.
Other Services
Hands-on experience with EFS, providing shared storage so every project member has access to common, centralized storage.
Good experience writing CloudFormation templates to create AWS infrastructure from JSON/YAML code.
Well aware of infrastructure-as-code concepts, which make it easy to test, version-control, and replicate environments.
Good knowledge of Elastic Beanstalk for testing code on demand without worrying about the underlying infrastructure.
Used Trusted Advisor to improve security, control costs, and deliver better performance.
Personally used Elastic Transcoder to convert media from one format to another.
Good understanding of the AWS whitepapers, which provide guidelines on security, cost control, and achieving operational excellence.
Started working with OpsWorks to manage infrastructure as code in combination with the well-known DevOps tool Chef.
Knowledge of data migration options such as Snowball, Snowball Edge, and Snowmobile.
Learned the messaging services SQS (Simple Queue Service) and SES (Simple Email Service).
Used Glacier to store objects not required for immediate retrieval, as part of cost control.
Good understanding of using the AWS CLI to create and manage AWS infrastructure.
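A CloudFormation fragment illustrating the infrastructure-as-code approach mentioned above (resource names and CIDR ranges are illustrative):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal VPC with one public subnet (illustrative values)
Resources:
  DemoVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
  PublicSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref DemoVPC        # reference keeps the resources linked
      CidrBlock: 10.0.1.0/24
      MapPublicIpOnLaunch: true
Outputs:
  VpcId:
    Value: !Ref DemoVPC
```

Because the template is plain text, it can be version-controlled in Git and replayed to replicate the same environment in another region.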

