<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Sushant's blog]]></title><description><![CDATA[Sushant's blog]]></description><link>https://blog.sushant.dev</link><generator>RSS for Node</generator><lastBuildDate>Fri, 17 Apr 2026 11:36:14 GMT</lastBuildDate><atom:link href="https://blog.sushant.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Getting Started With AWS]]></title><description><![CDATA[What is AWS?
What is Cloud Computing?
Cloud computing is the practice of using computing resources over the internet instead of owning and maintaining physical hardware.
These resources include:

Virtual servers

Storage systems

Databases

Networkin...]]></description><link>https://blog.sushant.dev/getting-started-with-aws</link><guid isPermaLink="true">https://blog.sushant.dev/getting-started-with-aws</guid><category><![CDATA[AWS]]></category><category><![CDATA[ec2]]></category><category><![CDATA[ci-cd]]></category><dc:creator><![CDATA[Sushant Pathare]]></dc:creator><pubDate>Sat, 13 Dec 2025 09:55:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765619688118/e9feeb9b-1e7c-4769-9522-b857e634e557.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-what-is-aws">What is AWS?</h1>
<h2 id="heading-what-is-cloud-computing">What is Cloud Computing?</h2>
<p>Cloud computing is the practice of using computing resources over the internet instead of owning and maintaining physical hardware.</p>
<p>These resources include:</p>
<ul>
<li><p>Virtual servers</p>
</li>
<li><p>Storage systems</p>
</li>
<li><p>Databases</p>
</li>
<li><p>Networking</p>
</li>
<li><p>Software and tools</p>
</li>
</ul>
<p>In cloud computing:</p>
<ul>
<li><p>You do not buy physical servers</p>
</li>
<li><p>You do not manage data centers</p>
</li>
<li><p>Resources are available instantly</p>
</li>
<li><p>You pay only for what you use</p>
</li>
</ul>
<p>Cloud computing removes the need for upfront infrastructure investment and allows applications to scale easily based on demand.</p>
<h2 id="heading-what-is-aws-1">What is AWS?</h2>
<p>Amazon Web Services (AWS) is a cloud computing platform provided by Amazon that offers on-demand infrastructure and managed services.</p>
<p>AWS provides:</p>
<ul>
<li><p>Computing power</p>
</li>
<li><p>Storage solutions</p>
</li>
<li><p>Database services</p>
</li>
<li><p>Networking capabilities</p>
</li>
<li><p>Security and monitoring tools</p>
</li>
</ul>
<p>AWS allows developers and companies to:</p>
<ul>
<li><p>Deploy applications within minutes</p>
</li>
<li><p>Scale applications automatically</p>
</li>
<li><p>Run workloads globally</p>
</li>
<li><p>Reduce infrastructure and operational costs</p>
</li>
</ul>
<p>AWS is used by startups, enterprises, and global companies to build reliable and scalable applications.</p>
<h2 id="heading-why-aws">Why AWS?</h2>
<h3 id="heading-pay-as-you-go-pricing">Pay As You Go Pricing</h3>
<ul>
<li><p>No upfront hardware cost</p>
</li>
<li><p>No long term contracts</p>
</li>
<li><p>Charges based on actual usage</p>
</li>
<li><p>Ideal for beginners and startups</p>
</li>
<li><p>Free Tier available for learning</p>
</li>
</ul>
<h3 id="heading-scalability-and-elasticity">Scalability and Elasticity</h3>
<ul>
<li><p>Resources can scale up automatically when traffic increases</p>
</li>
<li><p>Resources can scale down when demand reduces</p>
</li>
<li><p>No manual server management required</p>
</li>
<li><p>Helps handle sudden traffic spikes efficiently</p>
</li>
</ul>
<h3 id="heading-global-reach">Global Reach</h3>
<ul>
<li><p>AWS operates data centers in multiple regions worldwide</p>
</li>
<li><p>Applications can be deployed closer to users</p>
</li>
<li><p>Reduces latency and improves performance</p>
</li>
<li><p>Supports global and multi-region applications</p>
</li>
</ul>
<h3 id="heading-reliability-and-security">Reliability and Security</h3>
<ul>
<li><p>High availability using multiple Availability Zones</p>
</li>
<li><p>Built-in fault tolerance</p>
</li>
<li><p>Strong security services for access control</p>
</li>
<li><p>Data encryption and monitoring supported</p>
</li>
</ul>
<h2 id="heading-cloud-service-models">Cloud Service Models</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765615271909/1620be36-058a-4a4f-ad92-6f383b3a9c05.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-iaas-infrastructure-as-a-service">IaaS (Infrastructure as a Service)</h3>
<ul>
<li><p>Provides virtual machines and infrastructure</p>
</li>
<li><p>User manages:</p>
<ul>
<li><p>Operating system</p>
</li>
<li><p>Applications</p>
</li>
<li><p>Data</p>
</li>
</ul>
</li>
<li><p>Cloud provider manages:</p>
<ul>
<li><p>Physical servers</p>
</li>
<li><p>Networking</p>
</li>
<li><p>Virtualization<br />  <strong>Examples:</strong> Amazon EC2, Amazon EBS</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-paas-platform-as-a-service">PaaS (Platform as a Service)</h3>
<ul>
<li><p>Provides a ready to use platform for application development</p>
</li>
<li><p>User manages:</p>
<ul>
<li><p>Application code</p>
</li>
<li><p>Data</p>
</li>
</ul>
</li>
<li><p>Cloud provider manages:</p>
<ul>
<li><p>Operating system</p>
</li>
<li><p>Runtime environment</p>
</li>
<li><p>Infrastructure<br />  <strong>Examples:</strong> AWS Elastic Beanstalk, Amazon RDS</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-saas-software-as-a-service">SaaS (Software as a Service)</h3>
<ul>
<li><p>Fully managed software delivered over the internet</p>
</li>
<li><p>No infrastructure or platform management required</p>
</li>
<li><p>Accessible through web browsers or APIs<br />  <strong>Examples:</strong> Gmail, Google Docs, Zoom</p>
</li>
</ul>
<h1 id="heading-aws-global-infrastructure">AWS Global Infrastructure</h1>
<h2 id="heading-what-is-aws-global-infrastructure">What is AWS Global Infrastructure?</h2>
<p>AWS Global Infrastructure is the physical backbone of AWS that allows applications to run securely, reliably, and with low latency across the world.</p>
<p>It is designed to:</p>
<ul>
<li><p>Serve users globally</p>
</li>
<li><p>Provide high availability</p>
</li>
<li><p>Ensure fault tolerance</p>
</li>
<li><p>Reduce application downtime</p>
</li>
</ul>
<h2 id="heading-aws-region">AWS Region</h2>
<p>An AWS Region is a <strong>geographical area</strong> where AWS has multiple data centers.</p>
<p>Key points:</p>
<ul>
<li><p>Each region is completely independent</p>
</li>
<li><p>Regions are located across different countries</p>
</li>
<li><p>Data stays within the selected region unless explicitly transferred</p>
</li>
<li><p>Examples: Asia Pacific (Mumbai), US East (N. Virginia), Europe (Frankfurt)</p>
</li>
</ul>
<p>Why regions matter:</p>
<ul>
<li><p>Compliance and data residency</p>
</li>
<li><p>Lower latency for users</p>
</li>
<li><p>Disaster recovery planning</p>
</li>
</ul>
<h2 id="heading-availability-zones-azs">Availability Zones (AZs)</h2>
<p>An Availability Zone is <strong>one or more physically separate data centers</strong> within a region.</p>
<p>Key points:</p>
<ul>
<li><p>Each region has multiple AZs</p>
</li>
<li><p>AZs are isolated but connected with high speed networks</p>
</li>
<li><p>Designed so that a failure in one AZ does not spread to another</p>
</li>
</ul>
<p>Why AZs matter:</p>
<ul>
<li><p>High availability</p>
</li>
<li><p>Fault tolerance</p>
</li>
<li><p>Zero or minimal downtime during failures</p>
</li>
</ul>
<p>Example:</p>
<ul>
<li>Deploying an application across multiple AZs ensures it remains available even if one AZ fails.</li>
</ul>
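<p>A back-of-envelope calculation shows why spreading across AZs helps. Assuming (purely for illustration) that each AZ is independently available 99% of the time, the chance that at least one of N AZs is up grows quickly with N:</p>

```python
# Back-of-envelope sketch: if one AZ is independently available 99% of
# the time, the chance that at least one of N AZs is up is 1 - 0.01**N.
# The 99% figure is an assumption for illustration, not an AWS SLA.
def availability(per_az, n_azs):
    return 1 - (1 - per_az) ** n_azs

print(round(availability(0.99, 1), 6))  # one AZ:    0.99
print(round(availability(0.99, 2), 6))  # two AZs:   0.9999
print(round(availability(0.99, 3), 6))  # three AZs: 0.999999
```

<p>Going from one AZ to two turns "down about 3.7 days a year" into "down about an hour a year" under these simplified, independence-assuming numbers.</p>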
<h2 id="heading-edge-locations">Edge Locations</h2>
<p>Edge Locations are AWS data centers used to deliver content with <strong>low latency</strong> to users.</p>
<p>Key points:</p>
<ul>
<li><p>Used mainly by services like CloudFront</p>
</li>
<li><p>Content is cached closer to end users</p>
</li>
<li><p>Improves performance for static and dynamic content</p>
</li>
</ul>
<p>Why edge locations matter:</p>
<ul>
<li><p>Faster content delivery</p>
</li>
<li><p>Reduced load on origin servers</p>
</li>
<li><p>Better user experience globally</p>
</li>
</ul>
<h2 id="heading-region-vs-availability-zone-vs-edge-location">Region vs Availability Zone vs Edge Location</h2>
<ul>
<li><p><strong>Region</strong>: Large geographical area containing multiple AZs</p>
</li>
<li><p><strong>Availability Zone</strong>: Isolated data center within a region</p>
</li>
<li><p><strong>Edge Location</strong>: Used for content delivery and caching</p>
</li>
</ul>
<h2 id="heading-why-multi-az-architecture-is-important">Why Multi AZ Architecture is Important</h2>
<ul>
<li><p>Improves application availability</p>
</li>
<li><p>Protects against data center failures</p>
</li>
<li><p>Ensures business continuity</p>
</li>
<li><p>Recommended best practice for production workloads</p>
</li>
</ul>
<h2 id="heading-how-aws-global-infrastructure-helps-applications">How AWS Global Infrastructure Helps Applications</h2>
<ul>
<li><p>Enables global deployment</p>
</li>
<li><p>Supports disaster recovery</p>
</li>
<li><p>Improves performance and reliability</p>
</li>
<li><p>Provides strong fault tolerance</p>
</li>
</ul>
<h1 id="heading-identity-and-access-management-iam-amp-security">Identity and Access Management (IAM) &amp; Security</h1>
<h2 id="heading-what-is-iam">What is IAM?</h2>
<p>AWS Identity and Access Management (IAM) is a service that helps you securely control access to AWS resources. It allows you to define <strong>who can access AWS</strong> and <strong>what actions they are allowed to perform</strong>.</p>
<p>IAM is a <strong>global service</strong>, meaning it applies across all AWS regions and is used as the foundation for securing AWS accounts.</p>
<h2 id="heading-iam-users">IAM Users</h2>
<p>An IAM User represents an individual person or application that interacts with AWS.</p>
<p>Each IAM user:</p>
<ul>
<li><p>Has a unique identity</p>
</li>
<li><p>Can log in to the AWS Management Console</p>
</li>
<li><p>Can access AWS services using CLI or SDK</p>
</li>
<li><p>Gets permissions through IAM policies</p>
</li>
</ul>
<p>IAM users are commonly created for developers, admins, or applications that require long term access.</p>
<h2 id="heading-iam-groups">IAM Groups</h2>
<p>IAM Groups are used to manage permissions for multiple users together. Instead of assigning permissions to each user individually, permissions are assigned to the group.</p>
<p>Key characteristics:</p>
<ul>
<li><p>Groups contain users only</p>
</li>
<li><p>Permissions are applied at the group level</p>
</li>
<li><p>A user can be part of multiple groups</p>
</li>
</ul>
<p>This makes permission management easier and more organized.</p>
<h2 id="heading-iam-roles">IAM Roles</h2>
<p>IAM Roles are designed to provide <strong>temporary permissions</strong> without sharing long-term credentials.</p>
<p>Unlike users, roles:</p>
<ul>
<li><p>Do not have passwords or access keys</p>
</li>
<li><p>Are assumed by AWS services or users</p>
</li>
<li><p>Are commonly used for service-to-service communication</p>
</li>
</ul>
<p>For example, an EC2 instance can assume a role to access S3 securely.</p>
<h2 id="heading-iam-policies">IAM Policies</h2>
<p>IAM Policies define permissions in AWS. They specify what actions are allowed or denied on which resources.</p>
<p>Policies:</p>
<ul>
<li><p>Are written in JSON format</p>
</li>
<li><p>Contain rules called statements</p>
</li>
<li><p>Can be attached to users, groups, or roles</p>
</li>
</ul>
<p>Each policy statement includes:</p>
<ul>
<li><p>Effect (Allow or Deny)</p>
</li>
<li><p>Actions</p>
</li>
<li><p>Resources</p>
</li>
</ul>
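<p>The statement fields above can be seen in a minimal hand-written policy document. This is an illustration only (the bucket name is a placeholder), allowing read-only access to a single S3 bucket:</p>

```python
import json

# A minimal IAM policy document with one statement.
# "example-bucket" is a placeholder, not a real bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",  # Effect: Allow or Deny
            "Action": ["s3:GetObject", "s3:ListBucket"],  # Actions
            "Resource": [  # Resources the actions apply to
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

<p>Attached to a user, group, or role, this grants exactly the listed actions on the listed resources and nothing more.</p>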
<h2 id="heading-principle-of-least-privilege">Principle of Least Privilege</h2>
<p>The principle of least privilege means giving only the minimum permissions required to perform a task.</p>
<p>By following this principle:</p>
<ul>
<li><p>Security risks are reduced</p>
</li>
<li><p>Accidental changes are prevented</p>
</li>
<li><p>AWS accounts remain more secure</p>
</li>
</ul>
<h2 id="heading-aws-shared-responsibility-model">AWS Shared Responsibility Model</h2>
<p>AWS security follows a shared responsibility model where security is divided between AWS and the user.</p>
<p>AWS is responsible for:</p>
<ul>
<li><p>Physical data centers</p>
</li>
<li><p>Hardware and networking</p>
</li>
<li><p>Infrastructure security</p>
</li>
</ul>
<p>The user is responsible for:</p>
<ul>
<li><p>IAM configuration</p>
</li>
<li><p>Application security</p>
</li>
<li><p>Data protection and encryption</p>
</li>
<li><p>OS and patch management for EC2</p>
</li>
</ul>
<h2 id="heading-iam-security-best-practices">IAM Security Best Practices</h2>
<p>To secure AWS accounts effectively:</p>
<ul>
<li><p>Avoid using the root account for daily tasks</p>
</li>
<li><p>Enable multi-factor authentication (MFA)</p>
</li>
<li><p>Use IAM roles instead of access keys where possible</p>
</li>
<li><p>Rotate credentials regularly</p>
</li>
<li><p>Monitor activity using AWS CloudTrail</p>
</li>
</ul>
<h1 id="heading-compute-services">Compute Services</h1>
<h2 id="heading-amazon-ec2-elastic-compute-cloud">Amazon EC2 (Elastic Compute Cloud)</h2>
<p>Amazon EC2 is a service that provides <strong>virtual servers</strong> in the cloud. It allows users to run applications on scalable computing capacity without owning physical hardware.</p>
<p>With EC2, you can choose the operating system, instance type, and storage based on your application needs.</p>
<p>Key features of EC2:</p>
<ul>
<li><p>On demand virtual machines</p>
</li>
<li><p>Full control over OS and software</p>
</li>
<li><p>Multiple instance types for different workloads</p>
</li>
<li><p>Integration with other AWS services</p>
</li>
</ul>
<h2 id="heading-ec2-instance-types">EC2 Instance Types</h2>
<p>EC2 instances are categorized based on workload requirements.</p>
<p>Common categories include:</p>
<ul>
<li><p><strong>General Purpose</strong> – balanced compute, memory, and networking</p>
</li>
<li><p><strong>Compute Optimized</strong> – CPU intensive tasks</p>
</li>
<li><p><strong>Memory Optimized</strong> – memory intensive applications</p>
</li>
<li><p><strong>Storage Optimized</strong> – high disk performance</p>
</li>
</ul>
<p>Choosing the right instance type helps optimize performance and cost.</p>
<h2 id="heading-security-groups">Security Groups</h2>
<p>Security Groups act as a <strong>virtual firewall</strong> for EC2 instances. They control inbound and outbound traffic at the instance level.</p>
<p>Important points:</p>
<ul>
<li><p>Allow rules only (no deny rules)</p>
</li>
<li><p>Stateful: return traffic for allowed requests is automatically permitted</p>
</li>
<li><p>Applied directly to EC2 instances</p>
</li>
<li><p>Commonly used to allow HTTP, HTTPS, and SSH access</p>
</li>
</ul>
<h2 id="heading-key-pairs">Key Pairs</h2>
<p>Key Pairs are used to securely connect to EC2 instances.</p>
<p>Key characteristics:</p>
<ul>
<li><p>Consist of a public key and a private key</p>
</li>
<li><p>Private key is used to log in to the instance</p>
</li>
<li><p>Required for SSH access to Linux instances</p>
</li>
</ul>
<p>Key pairs help prevent unauthorized access to EC2 servers.</p>
<h2 id="heading-auto-scaling">Auto Scaling</h2>
<p>Auto Scaling automatically adjusts the number of EC2 instances based on demand. It ensures applications have the right amount of compute capacity at all times.</p>
<p>Auto Scaling helps with:</p>
<ul>
<li><p>High availability</p>
</li>
<li><p>Cost optimization</p>
</li>
<li><p>Automatic scaling during traffic spikes</p>
</li>
<li><p>Replacing unhealthy instances</p>
</li>
</ul>
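<p>The core of target-tracking scaling can be sketched as a proportional calculation: scale the fleet so the metric would return to its target. This is a simplified model for intuition, not the exact Auto Scaling algorithm:</p>

```python
import math

def desired_capacity(current_instances, current_metric, target_metric):
    """Simplified target-tracking sketch: scale the instance count
    proportionally so the average metric returns to the target."""
    return max(1, math.ceil(current_instances * current_metric / target_metric))

# 4 instances averaging 80% CPU against a 50% target -> scale out
print(desired_capacity(4, 80, 50))  # 7
# 4 instances averaging 20% CPU against a 50% target -> scale in
print(desired_capacity(4, 20, 50))  # 2
```

<p>Rounding up on scale-out is deliberate: it is safer to briefly run one instance too many than one too few.</p>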
<h2 id="heading-elastic-load-balancer-elb">Elastic Load Balancer (ELB)</h2>
<p>Elastic Load Balancer distributes incoming traffic across multiple EC2 instances to improve availability and fault tolerance.</p>
<p>Benefits of using ELB:</p>
<ul>
<li><p>Prevents overloading a single server</p>
</li>
<li><p>Improves application reliability</p>
</li>
<li><p>Works with Auto Scaling</p>
</li>
<li><p>Supports health checks</p>
</li>
</ul>
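<p>Conceptually, a load balancer combines health checks with request distribution: unhealthy targets are skipped and traffic rotates across the rest. A toy sketch (instance IDs are made up):</p>

```python
from itertools import cycle

# Hypothetical target pool; the load balancer only routes to
# instances that pass their health checks.
instances = [
    {"id": "i-aaa", "healthy": True},
    {"id": "i-bbb", "healthy": False},  # failed its health check
    {"id": "i-ccc", "healthy": True},
]

# Round-robin over healthy targets only
healthy = cycle([i["id"] for i in instances if i["healthy"]])
requests = [next(healthy) for _ in range(4)]
print(requests)  # ['i-aaa', 'i-ccc', 'i-aaa', 'i-ccc']
```

<p>Note that <code>i-bbb</code> never receives a request: this is how ELB health checks prevent traffic from reaching a broken server.</p>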
<h2 id="heading-types-of-load-balancers">Types of Load Balancers</h2>
<p>AWS provides different types of load balancers based on use case:</p>
<ul>
<li><p><strong>Application Load Balancer (ALB)</strong> – Layer 7, used for HTTP/HTTPS</p>
</li>
<li><p><strong>Network Load Balancer (NLB)</strong> – Layer 4, high-performance use cases</p>
</li>
<li><p><strong>Classic Load Balancer</strong> – legacy option</p>
</li>
</ul>
<h2 id="heading-how-these-services-work-together">How These Services Work Together</h2>
<p>In a typical architecture:</p>
<ul>
<li><p>EC2 instances run the application</p>
</li>
<li><p>ELB distributes traffic across instances</p>
</li>
<li><p>Auto Scaling adjusts the number of instances based on demand</p>
</li>
<li><p>Security Groups control access to the instances</p>
</li>
</ul>
<p>This combination ensures scalability, availability, and security.</p>
<h1 id="heading-storage-services">Storage Services</h1>
<h2 id="heading-amazon-s3-simple-storage-service">Amazon S3 (Simple Storage Service)</h2>
<p>Amazon S3 is an <strong>object storage service</strong> designed to store and retrieve any amount of data from anywhere on the internet. It is highly scalable, durable, and commonly used as the default storage service in AWS.</p>
<p>In S3, data is stored as <strong>objects</strong> inside <strong>buckets</strong> rather than traditional folders or disks.</p>
<p>Key characteristics of S3:</p>
<ul>
<li><p>Unlimited storage capacity</p>
</li>
<li><p>Data stored as objects (files + metadata)</p>
</li>
<li><p>Highly durable and available</p>
</li>
<li><p>Accessible via web, CLI, or SDK</p>
</li>
</ul>
<h2 id="heading-s3-buckets-and-objects">S3 Buckets and Objects</h2>
<p>An S3 <strong>bucket</strong> is a container for storing objects.</p>
<p>Important points:</p>
<ul>
<li><p>Bucket names must be globally unique</p>
</li>
<li><p>Buckets are created in a specific region</p>
</li>
<li><p>Objects are the actual files stored in buckets</p>
</li>
</ul>
<p>Each object consists of:</p>
<ul>
<li><p>Object data (file)</p>
</li>
<li><p>Metadata</p>
</li>
<li><p>Unique object key</p>
</li>
</ul>
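<p>The key/data/metadata structure can be modeled as a plain dictionary. This toy in-memory "bucket" (keys and values are made up) shows why S3 keys look like paths but are really just flat identifiers:</p>

```python
# Toy in-memory model of a bucket: objects are addressed by a flat
# key string, and each object carries data plus metadata.
bucket = {}

def put_object(bucket, key, data, metadata=None):
    bucket[key] = {"data": data, "metadata": metadata or {}}

put_object(bucket, "images/logo.png", b"\x89PNG...", {"Content-Type": "image/png"})
put_object(bucket, "logs/2025-12-13.txt", b"app started")

print(sorted(bucket))  # ['images/logo.png', 'logs/2025-12-13.txt']
print(bucket["images/logo.png"]["metadata"]["Content-Type"])  # image/png
```

<p>The slashes in the keys are just characters; S3 has no real folders, only key prefixes that consoles render as folders.</p>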
<h2 id="heading-s3-storage-classes">S3 Storage Classes</h2>
<p>S3 provides different storage classes based on access frequency and cost.</p>
<p>Common storage classes:</p>
<ul>
<li><p><strong>S3 Standard</strong> – frequently accessed data</p>
</li>
<li><p><strong>S3 Intelligent-Tiering</strong> – automatic cost optimization</p>
</li>
<li><p><strong>S3 Standard-IA</strong> – infrequently accessed data</p>
</li>
<li><p><strong>S3 Glacier</strong> – archival and long-term storage</p>
</li>
</ul>
<p>Choosing the right storage class helps reduce storage costs.</p>
<h2 id="heading-s3-security-and-access-control">S3 Security and Access Control</h2>
<p>S3 security is managed using:</p>
<ul>
<li><p>Bucket policies</p>
</li>
<li><p>IAM policies</p>
</li>
<li><p>Public and private access settings</p>
</li>
</ul>
<p>Key points:</p>
<ul>
<li><p>Buckets are private by default</p>
</li>
<li><p>Public access must be explicitly enabled</p>
</li>
<li><p>Supports encryption at rest and in transit</p>
</li>
</ul>
<h2 id="heading-amazon-ebs-elastic-block-store">Amazon EBS (Elastic Block Store)</h2>
<p>Amazon EBS provides <strong>block level storage</strong> that is attached to EC2 instances. It behaves like a physical hard drive for virtual servers.</p>
<p>EBS is mainly used for:</p>
<ul>
<li><p>Operating systems</p>
</li>
<li><p>Databases</p>
</li>
<li><p>Application files that require low latency</p>
</li>
</ul>
<p>Key features of EBS:</p>
<ul>
<li><p>Persistent storage</p>
</li>
<li><p>High performance</p>
</li>
<li><p>Automatically replicated within an Availability Zone</p>
</li>
</ul>
<h2 id="heading-s3-vs-ebs">S3 vs EBS</h2>
<p>S3 and EBS serve different purposes in AWS.</p>
<p>Main differences:</p>
<ul>
<li><p>S3 is object storage, EBS is block storage</p>
</li>
<li><p>S3 is accessed over the internet, EBS is attached to EC2</p>
</li>
<li><p>S3 is highly durable across AZs, EBS is limited to one AZ</p>
</li>
<li><p>S3 is ideal for backups and static content, EBS for OS and databases</p>
</li>
</ul>
<h2 id="heading-amazon-efs-elastic-file-system">Amazon EFS (Elastic File System)</h2>
<p>Amazon EFS is a <strong>managed file storage service</strong> that can be shared across multiple EC2 instances.</p>
<p>EFS provides:</p>
<ul>
<li><p>File-based storage</p>
</li>
<li><p>Automatic scaling</p>
</li>
<li><p>Access from multiple EC2 instances simultaneously</p>
</li>
</ul>
<p>It is commonly used for:</p>
<ul>
<li><p>Shared application files</p>
</li>
<li><p>Content management systems</p>
</li>
<li><p>Microservices requiring shared storage</p>
</li>
</ul>
<h2 id="heading-when-to-use-which-storage">When to Use Which Storage</h2>
<ul>
<li><p>Use <strong>S3</strong> for backups, static files, logs, and media</p>
</li>
<li><p>Use <strong>EBS</strong> for EC2 operating systems and databases</p>
</li>
<li><p>Use <strong>EFS</strong> when multiple EC2 instances need shared access</p>
</li>
</ul>
<h1 id="heading-database-services">Database Services</h1>
<h2 id="heading-amazon-rds-relational-database-service">Amazon RDS (Relational Database Service)</h2>
<p>Amazon RDS is a <strong>managed relational database service</strong> that makes it easy to set up, operate, and scale relational databases in the cloud.</p>
<p>With RDS, AWS handles most of the database administration tasks, allowing developers to focus on application logic instead of database maintenance.</p>
<p>RDS supports popular database engines:</p>
<ul>
<li><p>MySQL</p>
</li>
<li><p>PostgreSQL</p>
</li>
<li><p>MariaDB</p>
</li>
<li><p>Oracle</p>
</li>
<li><p>SQL Server</p>
</li>
</ul>
<h2 id="heading-key-features-of-rds">Key Features of RDS</h2>
<p>RDS provides several built-in features that simplify database management:</p>
<ul>
<li><p>Automated backups</p>
</li>
<li><p>Database snapshots</p>
</li>
<li><p>Automatic patching</p>
</li>
<li><p>Monitoring and performance metrics</p>
</li>
<li><p>High availability using Multi-AZ deployment</p>
</li>
</ul>
<p>These features help ensure reliability and data safety.</p>
<h2 id="heading-rds-multi-az-deployment">RDS Multi-AZ Deployment</h2>
<p>Multi-AZ deployment creates a standby replica of the database in another Availability Zone.</p>
<p>Important points:</p>
<ul>
<li><p>Improves availability and fault tolerance</p>
</li>
<li><p>Automatic failover during outages</p>
</li>
<li><p>No manual intervention required</p>
</li>
<li><p>Mainly used for production environments</p>
</li>
</ul>
<h2 id="heading-amazon-dynamodb">Amazon DynamoDB</h2>
<p>Amazon DynamoDB is a <strong>fully managed NoSQL key-value and document database</strong> designed for high performance and massive scalability.</p>
<p>Unlike relational databases, DynamoDB does not support joins. It stores data as items identified by primary keys.</p>
<p>Key characteristics:</p>
<ul>
<li><p>Serverless and fully managed</p>
</li>
<li><p>Single-digit millisecond latency</p>
</li>
<li><p>Automatic scaling</p>
</li>
<li><p>Highly available across multiple AZs</p>
</li>
</ul>
<h2 id="heading-dynamodb-data-model">DynamoDB Data Model</h2>
<p>DynamoDB organizes data using:</p>
<ul>
<li><p>Tables</p>
</li>
<li><p>Items (rows)</p>
</li>
<li><p>Attributes (columns)</p>
</li>
</ul>
<p>Each item is uniquely identified using:</p>
<ul>
<li><p>Partition key</p>
</li>
<li><p>Optional sort key</p>
</li>
</ul>
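<p>The partition-key-plus-sort-key idea can be modeled with a dictionary keyed by a tuple. The key format below (<code>USER#1</code>, <code>ORDER#...</code>) is a common single-table convention, used here purely as a made-up example:</p>

```python
# Toy model of a DynamoDB table: every item is addressed by a
# partition key plus an optional sort key.
table = {}

def put_item(table, pk, sk, attributes):
    table[(pk, sk)] = attributes

put_item(table, "USER#1", "ORDER#2025-01", {"total": 40})
put_item(table, "USER#1", "ORDER#2025-02", {"total": 25})

# A "query": all items sharing a partition key, ordered by sort key
orders = sorted(sk for (pk, sk) in table if pk == "USER#1")
print(orders)  # ['ORDER#2025-01', 'ORDER#2025-02']
```

<p>The partition key decides which storage node holds the item (enabling horizontal scaling); the sort key orders items within that partition.</p>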
<p>This design allows DynamoDB to scale horizontally with minimal latency.</p>
<h2 id="heading-rds-vs-dynamodb">RDS vs DynamoDB</h2>
<p>RDS and DynamoDB are used for different types of applications.</p>
<p>Main differences:</p>
<ul>
<li><p>RDS is relational, DynamoDB is NoSQL</p>
</li>
<li><p>RDS supports complex queries and joins</p>
</li>
<li><p>DynamoDB is designed for high-scale, low-latency workloads</p>
</li>
<li><p>RDS is ideal for structured data, DynamoDB for flexible schemas</p>
</li>
</ul>
<h2 id="heading-when-to-use-which-database">When to Use Which Database</h2>
<ul>
<li><p>Use <strong>RDS</strong> when you need relational data, transactions, and complex queries</p>
</li>
<li><p>Use <strong>DynamoDB</strong> when you need high scalability and low latency</p>
</li>
<li><p>Choose based on application requirements and access patterns</p>
</li>
</ul>
<h1 id="heading-networking-basics">Networking Basics</h1>
<h2 id="heading-amazon-vpc-virtual-private-cloud">Amazon VPC (Virtual Private Cloud)</h2>
<p>Amazon VPC is a <strong>logically isolated virtual network</strong> where you launch AWS resources like EC2, RDS, and Load Balancers.</p>
<p>A VPC gives you control over:</p>
<ul>
<li><p>IP address range</p>
</li>
<li><p>Subnets</p>
</li>
<li><p>Routing</p>
</li>
<li><p>Network security</p>
</li>
</ul>
<p>It acts like your own private data center inside AWS.</p>
<h2 id="heading-subnets">Subnets</h2>
<p>A subnet is a <strong>range of IP addresses</strong> within a VPC. Subnets allow you to organize and isolate resources.</p>
<p>There are two main types of subnets:</p>
<ul>
<li><p><strong>Public Subnet</strong> – resources have access to the internet</p>
</li>
<li><p><strong>Private Subnet</strong> – resources do not have direct internet access</p>
</li>
</ul>
<p>Best practice:</p>
<ul>
<li><p>Place load balancers in public subnets</p>
</li>
<li><p>Place databases in private subnets</p>
</li>
</ul>
<h2 id="heading-internet-gateway-igw">Internet Gateway (IGW)</h2>
<p>An Internet Gateway allows communication between resources in a VPC and the internet.</p>
<p>Key points:</p>
<ul>
<li><p>Attached to a VPC</p>
</li>
<li><p>Enables inbound and outbound internet traffic</p>
</li>
<li><p>Required for public subnets to access the internet</p>
</li>
</ul>
<p>Without an Internet Gateway, resources remain isolated from the public internet.</p>
<h2 id="heading-route-tables">Route Tables</h2>
<p>Route tables control how network traffic flows within a VPC.</p>
<p>Each route table contains:</p>
<ul>
<li><p>Destination (IP range)</p>
</li>
<li><p>Target (IGW, NAT Gateway, or local)</p>
</li>
</ul>
<p>Important points:</p>
<ul>
<li><p>Every subnet must be associated with a route table</p>
</li>
<li><p>Routes determine whether a subnet is public or private</p>
</li>
</ul>
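<p>The public-versus-private distinction comes down to one route: whether <code>0.0.0.0/0</code> points at an internet gateway. A small sketch (gateway IDs are placeholders):</p>

```python
import ipaddress

# A route table modeled as (destination CIDR, target) pairs.
# A subnet is "public" if its table sends 0.0.0.0/0 to an
# internet gateway; the IDs below are made up.
public_rt = [("10.0.0.0/16", "local"), ("0.0.0.0/0", "igw-123")]
private_rt = [("10.0.0.0/16", "local"), ("0.0.0.0/0", "nat-456")]

def is_public(route_table):
    return any(
        ipaddress.ip_network(dest).prefixlen == 0 and target.startswith("igw-")
        for dest, target in route_table
    )

print(is_public(public_rt))   # True
print(is_public(private_rt))  # False
```

<p>The private table still has a default route, but it targets a NAT gateway, so instances can reach out without being reachable from outside.</p>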
<h2 id="heading-nat-gateway">NAT Gateway</h2>
<p>A NAT Gateway allows instances in a <strong>private subnet</strong> to access the internet without being exposed to inbound traffic.</p>
<p>Use cases:</p>
<ul>
<li><p>Download updates</p>
</li>
<li><p>Access external APIs</p>
</li>
<li><p>Maintain security for private resources</p>
</li>
</ul>
<p>NAT Gateways are commonly used for backend servers.</p>
<h2 id="heading-security-groups-1">Security Groups</h2>
<p>Security Groups act as <strong>virtual firewalls</strong> at the instance level.</p>
<p>Key characteristics:</p>
<ul>
<li><p>Stateful</p>
</li>
<li><p>Allow rules only</p>
</li>
<li><p>Applied to EC2 and other resources</p>
</li>
<li><p>Control inbound and outbound traffic</p>
</li>
</ul>
<p>Security Groups are the first layer of network security.</p>
<h2 id="heading-network-acls-nacls">Network ACLs (NACLs)</h2>
<p>Network ACLs provide security at the <strong>subnet level</strong>.</p>
<p>Key characteristics:</p>
<ul>
<li><p>Stateless: return traffic must be explicitly allowed</p>
</li>
<li><p>Support allow and deny rules</p>
</li>
<li><p>Applied to all resources in a subnet</p>
</li>
</ul>
<p>NACLs act as an additional layer of security.</p>
<h2 id="heading-security-groups-vs-network-acls">Security Groups vs Network ACLs</h2>
<p>Security Groups:</p>
<ul>
<li><p>Operate at instance level</p>
</li>
<li><p>Stateful</p>
</li>
<li><p>Only allow rules</p>
</li>
</ul>
<p>Network ACLs:</p>
<ul>
<li><p>Operate at subnet level</p>
</li>
<li><p>Stateless</p>
</li>
<li><p>Allow and deny rules</p>
</li>
</ul>
<h2 id="heading-how-networking-components-work-together">How Networking Components Work Together</h2>
<p>In a typical setup:</p>
<ul>
<li><p>VPC provides the network</p>
</li>
<li><p>Subnets divide the network</p>
</li>
<li><p>Route tables control traffic flow</p>
</li>
<li><p>Internet Gateway enables internet access</p>
</li>
<li><p>Security Groups and NACLs secure resources</p>
</li>
</ul>
<h1 id="heading-monitoring-amp-logging">Monitoring &amp; Logging</h1>
<h2 id="heading-amazon-cloudwatch">Amazon CloudWatch</h2>
<p>Amazon CloudWatch is a monitoring service that helps you <strong>observe and track the performance of AWS resources and applications</strong> in real time.</p>
<p>CloudWatch collects metrics, logs, and events, allowing teams to detect issues early and maintain system health.</p>
<p>Key capabilities of CloudWatch:</p>
<ul>
<li><p>Resource performance monitoring</p>
</li>
<li><p>Application log collection</p>
</li>
<li><p>Custom metrics support</p>
</li>
<li><p>Alarm and notification setup</p>
</li>
</ul>
<h2 id="heading-cloudwatch-metrics">CloudWatch Metrics</h2>
<p>Metrics are <strong>numerical data points</strong> that represent the performance of AWS resources.</p>
<p>Common metrics include:</p>
<ul>
<li><p>CPU utilization</p>
</li>
<li><p>Memory usage</p>
</li>
<li><p>Disk I/O</p>
</li>
<li><p>Network traffic</p>
</li>
</ul>
<p>Metrics help identify performance bottlenecks and resource utilization trends.</p>
<h2 id="heading-cloudwatch-logs">CloudWatch Logs</h2>
<p>CloudWatch Logs store and manage log files generated by AWS services and applications.</p>
<p>Important uses:</p>
<ul>
<li><p>Debugging application issues</p>
</li>
<li><p>Analyzing system behavior</p>
</li>
<li><p>Centralized log management</p>
</li>
</ul>
<p>Logs can be collected from EC2, Lambda, and other AWS services.</p>
<h2 id="heading-cloudwatch-alarms">CloudWatch Alarms</h2>
<p>CloudWatch Alarms monitor metrics and trigger actions when thresholds are crossed.</p>
<p>Alarms can:</p>
<ul>
<li><p>Send notifications via SNS</p>
</li>
<li><p>Trigger Auto Scaling actions</p>
</li>
<li><p>Help respond to incidents quickly</p>
</li>
</ul>
<p>Example:</p>
<ul>
<li>Alert when CPU usage exceeds a defined limit.</li>
</ul>
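<p>The alarm logic above boils down to "threshold breached for N consecutive evaluation periods". A simplified model of that rule (CloudWatch's real evaluation has more options, such as datapoints-to-alarm):</p>

```python
def alarm_state(datapoints, threshold, evaluation_periods):
    """Simplified alarm model: ALARM only if the metric breached the
    threshold for the last `evaluation_periods` consecutive samples."""
    recent = datapoints[-evaluation_periods:]
    if len(recent) == evaluation_periods and all(v > threshold for v in recent):
        return "ALARM"
    return "OK"

cpu = [35, 42, 81, 86, 90]  # hypothetical CPU-utilization samples (%)
print(alarm_state(cpu, threshold=80, evaluation_periods=3))  # ALARM
print(alarm_state(cpu, threshold=80, evaluation_periods=5))  # OK
```

<p>Requiring several consecutive breaches keeps a single noisy spike from paging anyone.</p>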
<h2 id="heading-aws-cloudtrail">AWS CloudTrail</h2>
<p>AWS CloudTrail is a service that <strong>records API calls and account activity</strong> in your AWS account.</p>
<p>CloudTrail helps with:</p>
<ul>
<li><p>Security auditing</p>
</li>
<li><p>Compliance monitoring</p>
</li>
<li><p>Tracking who did what and when</p>
</li>
<li><p>Investigating suspicious activity</p>
</li>
</ul>
<h2 id="heading-cloudtrail-logs">CloudTrail Logs</h2>
<p>CloudTrail logs include:</p>
<ul>
<li><p>Identity of the caller</p>
</li>
<li><p>Time of the API request</p>
</li>
<li><p>Source IP address</p>
</li>
<li><p>Actions performed</p>
</li>
</ul>
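<p>Those fields appear directly in each event record. The hand-written sample below uses real CloudTrail field names (<code>eventTime</code>, <code>eventName</code>, <code>sourceIPAddress</code>, <code>userIdentity</code>) with invented values:</p>

```python
import json

# A simplified, hand-written CloudTrail event record; field names
# follow the documented event structure, values are made up.
event = json.loads("""{
  "eventTime": "2025-12-13T09:55:47Z",
  "eventName": "RunInstances",
  "sourceIPAddress": "203.0.113.10",
  "userIdentity": {"type": "IAMUser", "userName": "dev-user"}
}""")

# "Who did what and when" in one line
print(event["userIdentity"]["userName"], event["eventName"], event["eventTime"])
```

<p>Auditing tools answer "who launched this instance?" by filtering exactly these fields across the event history.</p>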
<p>These logs are useful for auditing and security analysis.</p>
<h2 id="heading-cloudwatch-vs-cloudtrail">CloudWatch vs CloudTrail</h2>
<p>CloudWatch focuses on:</p>
<ul>
<li><p>Performance monitoring</p>
</li>
<li><p>Resource health</p>
</li>
<li><p>Logs and alarms</p>
</li>
</ul>
<p>CloudTrail focuses on:</p>
<ul>
<li><p>User activity</p>
</li>
<li><p>API call history</p>
</li>
<li><p>Security and compliance</p>
</li>
</ul>
<h2 id="heading-why-monitoring-and-logging-matter">Why Monitoring and Logging Matter</h2>
<p>Monitoring and logging ensure:</p>
<ul>
<li><p>High application availability</p>
</li>
<li><p>Faster issue detection</p>
</li>
<li><p>Better security visibility</p>
</li>
<li><p>Improved system reliability</p>
</li>
</ul>
<h1 id="heading-devops-amp-deployment-basics-cicd">DevOps &amp; Deployment Basics (CI/CD)</h1>
<h2 id="heading-what-is-devops">What is DevOps?</h2>
<p>DevOps is a set of practices that aims to <strong>reduce the gap between development and operations</strong> teams. It focuses on automating software delivery, improving collaboration, and ensuring faster and reliable releases.</p>
<p>DevOps helps teams:</p>
<ul>
<li><p>Release features faster</p>
</li>
<li><p>Reduce deployment failures</p>
</li>
<li><p>Improve system stability</p>
</li>
<li><p>Automate repetitive tasks</p>
</li>
</ul>
<h2 id="heading-what-is-cicd">What is CI/CD?</h2>
<p>CI/CD stands for <strong>Continuous Integration and Continuous Deployment</strong>.</p>
<ul>
<li><p><strong>Continuous Integration (CI)</strong> is the practice of automatically building and testing code whenever changes are pushed.</p>
</li>
<li><p><strong>Continuous Deployment (CD)</strong> is the practice of automatically deploying tested code to production or staging environments.</p>
</li>
</ul>
<p>CI/CD ensures that applications are always in a deployable state.</p>
<h2 id="heading-cicd-in-aws">CI/CD in AWS</h2>
<p>AWS provides managed services to build complete CI/CD pipelines without managing servers.</p>
<p>AWS CI/CD services include:</p>
<ul>
<li><p>CodeCommit</p>
</li>
<li><p>CodeBuild</p>
</li>
<li><p>CodeDeploy</p>
</li>
<li><p>CodePipeline</p>
</li>
</ul>
<p>These services integrate well with each other and other AWS services.</p>
<h2 id="heading-aws-codecommit">AWS CodeCommit</h2>
<p>AWS CodeCommit is a <strong>fully managed source control service</strong> that hosts Git repositories.</p>
<p>Key points:</p>
<ul>
<li><p>Secure and scalable Git repositories</p>
</li>
<li><p>Integrated with IAM for access control</p>
</li>
<li><p>Used to store application source code</p>
</li>
</ul>
<h2 id="heading-aws-codebuild">AWS CodeBuild</h2>
<p>AWS CodeBuild is a <strong>build service</strong> that compiles source code, runs tests, and produces build artifacts.</p>
<p>CodeBuild:</p>
<ul>
<li><p>Automatically scales build resources</p>
</li>
<li><p>Supports multiple programming languages</p>
</li>
<li><p>Eliminates the need to manage build servers</p>
</li>
</ul>
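<p>CodeBuild reads its build steps from a <code>buildspec.yml</code> file in the repository root. A minimal sketch for a Node.js project (the commands are illustrative, not from this article) might look like:</p>

```yaml
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 18
  build:
    commands:
      - npm ci      # install dependencies
      - npm test    # run the test suite
artifacts:
  files:
    - '**/*'        # package everything as the build artifact
```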
<h2 id="heading-aws-codedeploy">AWS CodeDeploy</h2>
<p>AWS CodeDeploy is used to <strong>automate application deployments</strong> to compute services like EC2 and Lambda.</p>
<p>It helps:</p>
<ul>
<li><p>Reduce deployment downtime</p>
</li>
<li><p>Handle rolling and blue-green deployments</p>
</li>
<li><p>Monitor deployment success or failure</p>
</li>
</ul>
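<p>Deployments are described by an <code>appspec.yml</code> file that tells CodeDeploy where to copy files and which lifecycle scripts to run. A minimal EC2/on-premises sketch (the paths and script names are illustrative):</p>

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/my-app
hooks:
  ApplicationStop:
    - location: scripts/stop.sh
  AfterInstall:
    - location: scripts/install_deps.sh
  ApplicationStart:
    - location: scripts/start.sh
```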
<h2 id="heading-aws-codepipeline">AWS CodePipeline</h2>
<p>AWS CodePipeline is a <strong>CI/CD orchestration service</strong> that automates the entire release process.</p>
<p>Pipeline stages typically include:</p>
<ul>
<li><p>Source</p>
</li>
<li><p>Build</p>
</li>
<li><p>Test</p>
</li>
<li><p>Deploy</p>
</li>
</ul>
<p>CodePipeline connects all CI/CD services into a single automated workflow.</p>
<h2 id="heading-how-cicd-works-in-aws">How CI/CD Works in AWS</h2>
<p>In a typical setup:</p>
<ul>
<li><p>Code is pushed to CodeCommit</p>
</li>
<li><p>CodeBuild builds and tests the application</p>
</li>
<li><p>CodeDeploy deploys the application</p>
</li>
<li><p>CodePipeline manages the entire flow</p>
</li>
</ul>
<p>This automation ensures consistent and reliable deployments.</p>
<h1 id="heading-serverless-basics">Serverless Basics</h1>
<h2 id="heading-what-is-serverless">What is Serverless?</h2>
<p>Serverless is a cloud computing model where developers <strong>do not manage servers</strong>. The cloud provider automatically handles infrastructure provisioning, scaling, and maintenance.</p>
<p>In serverless architecture:</p>
<ul>
<li><p>No server management is required</p>
</li>
<li><p>Applications scale automatically</p>
</li>
<li><p>Billing is based on actual execution time</p>
</li>
</ul>
<h2 id="heading-aws-lambda">AWS Lambda</h2>
<p>AWS Lambda is a <strong>serverless compute service</strong> that runs code in response to events without provisioning or managing servers.</p>
<p>Lambda allows developers to:</p>
<ul>
<li><p>Run backend logic without managing EC2</p>
</li>
<li><p>Execute code only when triggered</p>
</li>
<li><p>Scale automatically based on requests</p>
</li>
</ul>
<p>Key characteristics of Lambda:</p>
<ul>
<li><p>Event-driven execution</p>
</li>
<li><p>Supports multiple programming languages</p>
</li>
<li><p>Short-lived executions</p>
</li>
<li><p>Pay only for execution time</p>
</li>
</ul>
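<p>Once deployed, a function can be invoked directly from the AWS CLI (v2 syntax), which is a quick way to see the event-driven, pay-per-execution model in action. The function name and payload are placeholders:</p>

```bash
# Sketch: synchronously invoke a Lambda function and print its response.
aws lambda invoke \
  --function-name my-function \
  --cli-binary-format raw-in-base64-out \
  --payload '{"name": "world"}' \
  response.json
cat response.json
```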
<h2 id="heading-lambda-triggers">Lambda Triggers</h2>
<p>Lambda functions can be triggered by various AWS services.</p>
<p>Common triggers include:</p>
<ul>
<li><p>API Gateway</p>
</li>
<li><p>S3 events</p>
</li>
<li><p>DynamoDB streams</p>
</li>
<li><p>CloudWatch events</p>
</li>
</ul>
<p>This makes Lambda suitable for building event-driven systems.</p>
<h2 id="heading-amazon-api-gateway">Amazon API Gateway</h2>
<p>Amazon API Gateway is a managed service that allows you to <strong>create, publish, and manage APIs</strong>.</p>
<p>API Gateway is commonly used to:</p>
<ul>
<li><p>Expose backend services to frontend applications</p>
</li>
<li><p>Connect Lambda functions to HTTP endpoints</p>
</li>
<li><p>Secure APIs using authentication and throttling</p>
</li>
</ul>
<h2 id="heading-lambda-api-gateway-architecture">Lambda + API Gateway Architecture</h2>
<p>In a typical serverless setup:</p>
<ul>
<li><p>API Gateway receives HTTP requests</p>
</li>
<li><p>Requests trigger Lambda functions</p>
</li>
<li><p>Lambda processes the logic</p>
</li>
<li><p>Response is returned to the client</p>
</li>
</ul>
<p>This architecture is widely used for building scalable backend APIs.</p>
<h2 id="heading-benefits-of-serverless-architecture">Benefits of Serverless Architecture</h2>
<p>Serverless provides:</p>
<ul>
<li><p>Automatic scaling</p>
</li>
<li><p>High availability</p>
</li>
<li><p>Reduced operational overhead</p>
</li>
<li><p>Cost efficiency for low-to-medium workloads</p>
</li>
</ul>
<h1 id="heading-pricing-amp-cost-management">Pricing &amp; Cost Management</h1>
<h2 id="heading-aws-pricing-model">AWS Pricing Model</h2>
<p>AWS follows a <strong>pay-as-you-go pricing model</strong>, meaning you only pay for the resources you use. There are <strong>no upfront costs</strong> for most services, making it flexible for beginners and businesses.</p>
<p>Key points:</p>
<ul>
<li><p>Pay per compute hour, storage, or request</p>
</li>
<li><p>Free Tier available for 12 months on selected services</p>
</li>
<li><p>Pricing varies by region and service type</p>
</li>
<li><p>Allows cost optimization based on usage</p>
</li>
</ul>
<h2 id="heading-aws-free-tier">AWS Free Tier</h2>
<p>The AWS Free Tier allows new users to <strong>experiment and learn</strong> AWS services without incurring charges.</p>
<p>Free Tier includes:</p>
<ul>
<li><p>750 hours of EC2 t2.micro instances per month</p>
</li>
<li><p>5 GB of S3 standard storage</p>
</li>
<li><p>750 hours of RDS Single-AZ db.t2.micro</p>
</li>
<li><p>Free Lambda requests up to 1 million per month</p>
</li>
</ul>
<h2 id="heading-cost-optimization-strategies">Cost Optimization Strategies</h2>
<p>To manage costs efficiently:</p>
<ul>
<li><p>Use <strong>Auto Scaling</strong> to adjust resources dynamically</p>
</li>
<li><p>Choose <strong>right instance types</strong> for workloads</p>
</li>
<li><p>Use <strong>S3 storage classes</strong> based on access patterns</p>
</li>
<li><p>Delete unused resources</p>
</li>
<li><p>Monitor usage using <strong>AWS Cost Explorer</strong></p>
</li>
</ul>
<h2 id="heading-billing-and-monitoring-tools">Billing and Monitoring Tools</h2>
<p>AWS provides tools to track and manage costs:</p>
<ul>
<li><p><strong>AWS Cost Explorer</strong> – visualize and analyze costs</p>
</li>
<li><p><strong>AWS Budgets</strong> – set alerts for budget thresholds</p>
</li>
<li><p><strong>AWS Trusted Advisor</strong> – recommendations to reduce cost</p>
</li>
</ul>
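<p>Both tools are also scriptable. For example, Cost Explorer data can be pulled from the CLI; the date range below is a placeholder, and Cost Explorer must already be enabled on the account:</p>

```bash
# Sketch: total unblended cost for one month, grouped by service.
aws ce get-cost-and-usage \
  --time-period Start=2026-03-01,End=2026-04-01 \
  --granularity MONTHLY \
  --metrics UnblendedCost \
  --group-by Type=DIMENSION,Key=SERVICE
```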
<h2 id="heading-why-cost-management-is-important">Why Cost Management is Important</h2>
<ul>
<li><p>Prevents unexpected bills</p>
</li>
<li><p>Ensures efficient resource usage</p>
</li>
<li><p>Helps optimize architecture</p>
</li>
<li><p>Essential for startups and personal projects</p>
</li>
</ul>
<h1 id="heading-real-world-example-hosting-a-web-application-on-aws">Real World Example: Hosting a Web Application on AWS</h1>
<h2 id="heading-scenario">Scenario</h2>
<p>You want to host a simple web application that serves users globally, stores data, and scales automatically with traffic.</p>
<p>AWS services used:</p>
<ul>
<li><p>EC2 for web servers</p>
</li>
<li><p>S3 for static assets</p>
</li>
<li><p>RDS for relational database</p>
</li>
<li><p>VPC for networking</p>
</li>
<li><p>Security Groups and IAM for access control</p>
</li>
<li><p>CloudWatch for monitoring</p>
</li>
<li><p>Auto Scaling and ELB for scalability</p>
</li>
<li><p>Lambda + API Gateway for serverless backend (optional)</p>
</li>
</ul>
<h2 id="heading-architecture-flow">Architecture Flow</h2>
<ol>
<li><p><strong>VPC &amp; Subnets</strong></p>
<ul>
<li><p>Public subnet hosts EC2 web servers</p>
</li>
<li><p>Private subnet hosts RDS database</p>
</li>
</ul>
</li>
<li><p><strong>Elastic Load Balancer (ELB)</strong></p>
<ul>
<li><p>Distributes incoming traffic across EC2 instances</p>
</li>
<li><p>Ensures high availability</p>
</li>
</ul>
</li>
<li><p><strong>Auto Scaling</strong></p>
<ul>
<li><p>Automatically adds or removes EC2 instances based on demand</p>
</li>
<li><p>Reduces cost and handles traffic spikes</p>
</li>
</ul>
</li>
<li><p><strong>S3 Storage</strong></p>
<ul>
<li><p>Stores images, CSS, JavaScript files, and backups</p>
</li>
<li><p>Can serve static content globally</p>
</li>
</ul>
</li>
<li><p><strong>IAM &amp; Security</strong></p>
<ul>
<li><p>Users, roles, and policies manage access to AWS resources</p>
</li>
<li><p>Security Groups control inbound and outbound traffic</p>
</li>
</ul>
</li>
<li><p><strong>Monitoring &amp; Logging</strong></p>
<ul>
<li><p>CloudWatch monitors metrics like CPU, memory, and traffic</p>
</li>
<li><p>CloudTrail tracks API calls and user activity</p>
</li>
</ul>
</li>
<li><p><strong>Optional Serverless Backend</strong></p>
<ul>
<li><p>Lambda functions handle API requests</p>
</li>
<li><p>API Gateway exposes HTTP endpoints</p>
</li>
<li><p>Reduces server management overhead</p>
</li>
</ul>
</li>
</ol>
<h2 id="heading-benefits-of-this-architecture">Benefits of This Architecture</h2>
<ul>
<li><p>High availability across multiple Availability Zones</p>
</li>
<li><p>Automatic scaling based on traffic</p>
</li>
<li><p>Secure access control with IAM and Security Groups</p>
</li>
<li><p>Centralized monitoring and alerting</p>
</li>
<li><p>Cost-efficient by paying only for resources used</p>
</li>
</ul>
<p>This architecture demonstrates a <strong>typical end-to-end AWS setup</strong> for a beginner to understand how multiple services work together in a real-world scenario.</p>
]]></content:encoded></item><item><title><![CDATA[Getting Started with CI/CD]]></title><description><![CDATA[Introduction
In modern software development, shipping code quickly and reliably is just as important as writing the code itself. This is where CI/CD comes into play. Continuous Integration (CI) ensures that every code change is tested and validated a...]]></description><link>https://blog.sushant.dev/getting-started-with-cicd</link><guid isPermaLink="true">https://blog.sushant.dev/getting-started-with-cicd</guid><category><![CDATA[Docker]]></category><category><![CDATA[Jenkins]]></category><category><![CDATA[CI/CD]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[cicd]]></category><dc:creator><![CDATA[Sushant Pathare]]></dc:creator><pubDate>Sun, 07 Dec 2025 13:32:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765114253900/5bbc244b-ab78-4be7-aab1-172a6f669b00.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction"><strong>Introduction</strong></h2>
<p>In modern software development, shipping code quickly and reliably is just as important as writing the code itself. This is where <strong>CI/CD</strong> comes into play. Continuous Integration (CI) ensures that every code change is tested and validated automatically, while Continuous Deployment (CD) makes sure that successful changes are delivered to production without manual effort. Together, they allow teams to release features faster, reduce human errors, and maintain a predictable development workflow.</p>
<p>When it comes to implementing CI/CD for backend applications, <strong>Jenkins</strong>, <strong>Docker</strong>, and <strong>AWS</strong> form a powerful and production-ready combination.</p>
<ul>
<li><p><strong>Jenkins</strong> provides a highly flexible automation engine that can run pipelines, integrate with GitHub, and orchestrate deployments.</p>
</li>
<li><p><strong>Docker</strong> ensures your application runs in a consistent environment across development, testing, and production.</p>
</li>
<li><p><strong>AWS</strong> offers scalable and reliable infrastructure with services like <strong>EC2</strong> for hosting and <strong>ECR</strong> for storing your container images.</p>
</li>
</ul>
<p>In this guide, you will learn how to build a complete CI/CD pipeline where every code update triggers an automated workflow:</p>
<p><strong>Code Push → Jenkins Pipeline → Docker Image Build → Push to Amazon ECR → Deploy on Amazon EC2</strong></p>
<p>By the end of this blog, you will have a fully automated deployment setup for your backend application using Jenkins Pipeline, Docker, Amazon ECR, and EC2.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765101004666/1429c74e-9580-4b7c-8ec0-c2003e69e12b.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-prerequisites"><strong>Prerequisites</strong></h2>
<p>Before we begin, make sure you have the following tools and configurations ready. These will ensure a smooth setup of the CI/CD pipeline using Jenkins, Docker, Amazon ECR, and EC2.</p>
<h3 id="heading-1-aws-account"><strong>1. AWS Account</strong></h3>
<p>You need an active AWS account to create:</p>
<ul>
<li><p>EC2 instance (to host Jenkins and deploy backend)</p>
</li>
<li><p>ECR repository (to store Docker images)</p>
</li>
<li><p>IAM user/role with proper permissions</p>
</li>
</ul>
<h3 id="heading-2-ec2-instance-ubuntu-recommended"><strong>2. EC2 Instance (Ubuntu Recommended)</strong></h3>
<p>Launch an EC2 instance with:</p>
<ul>
<li><p><strong>Ubuntu 20.04 or 22.04</strong></p>
</li>
<li><p><strong>t2.micro</strong> (Free-tier) or higher</p>
</li>
<li><p>Open the following ports in the Security Group:</p>
<ul>
<li><p><strong>22</strong> → SSH</p>
</li>
<li><p><strong>8080</strong> → Jenkins</p>
</li>
<li><p><strong>3000/8081</strong> → Backend application port</p>
</li>
<li><p><strong>80 / 443</strong> (optional if using Nginx or HTTPS)</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-3-ssh-key-pair"><strong>3. SSH Key Pair</strong></h3>
<p>A key pair (.pem file) is required to:</p>
<ul>
<li><p>SSH into your EC2 instance</p>
</li>
<li><p>Allow Jenkins pipeline to log into EC2 for deployment</p>
</li>
</ul>
<h3 id="heading-4-github-repository"><strong>4. GitHub Repository</strong></h3>
<p>You need a backend project hosted on GitHub containing:</p>
<ul>
<li><p>Source code</p>
</li>
<li><p><code>Dockerfile</code> at the root folder</p>
</li>
<li><p>Jenkinsfile (or plan to add it during this guide)</p>
</li>
</ul>
<p>Supported backend languages include Node.js, Java, Python, Go, etc.</p>
<h3 id="heading-5-docker-installed-on-ec2"><strong>5. Docker Installed on EC2</strong></h3>
<p>Docker must be installed on your EC2 instance where the backend will run.<br />Jenkins will also use Docker to:</p>
<ul>
<li><p>Build the image</p>
</li>
<li><p>Tag and push it to ECR</p>
</li>
<li><p>Redeploy it on EC2</p>
</li>
</ul>
<h3 id="heading-6-jenkins-installed-on-ec2"><strong>6. Jenkins Installed on EC2</strong></h3>
<p>A running Jenkins server with:</p>
<ul>
<li><p>Admin access</p>
</li>
<li><p>Required plugins (Git, Docker Pipeline, Amazon ECR, SSH Agent)</p>
</li>
</ul>
<h3 id="heading-7-aws-cli-installed"><strong>7. AWS CLI Installed</strong></h3>
<p>AWS CLI is needed on the Jenkins server to authenticate with Amazon ECR.</p>
<h3 id="heading-8-basic-knowledge-requirements"><strong>8. Basic Knowledge Requirements</strong></h3>
<p>To follow this guide smoothly, you should have:</p>
<ul>
<li><p>Basic Linux/terminal knowledge</p>
</li>
<li><p>Understanding of Git workflow</p>
</li>
<li><p>Familiarity with Docker basics (build, run, push)</p>
</li>
<li><p>Minimal understanding of AWS EC2 and ECR</p>
</li>
</ul>
<h2 id="heading-step-1-set-up-your-ec2-instance"><strong>Step 1: Set Up Your EC2 Instance</strong></h2>
<p>To run Jenkins and deploy your backend application, we’ll use an Amazon EC2 instance. This instance will act as both your CI/CD server (Jenkins) and your deployment target.</p>
<h3 id="heading-11-launch-an-ec2-instance"><strong>1.1 Launch an EC2 Instance</strong></h3>
<p>Follow these steps in the AWS Console:</p>
<ol>
<li><p>Go to <strong>EC2 → Instances → Launch Instance</strong></p>
</li>
<li><p>Choose an AMI:</p>
<ul>
<li><strong>Ubuntu Server LTS</strong> (recommended for stability and Docker support)</li>
</ul>
</li>
<li><p>Select Instance Type:</p>
<ul>
<li><p><strong>t2.micro</strong> (free-tier eligible)</p>
</li>
<li><p>You can choose t2.small or higher if your Jenkins workload is heavier.</p>
</li>
</ul>
</li>
<li><p>Create or select an <strong>SSH Key Pair</strong> (.pem file)</p>
</li>
<li><p>Configure <strong>Security Group</strong> with the following inbound rules:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Port</td><td>Purpose</td></tr>
</thead>
<tbody>
<tr>
<td>22</td><td>SSH access</td></tr>
<tr>
<td>8080</td><td>Jenkins dashboard</td></tr>
<tr>
<td>3000 / 8081</td><td>Backend application (use whichever port your app exposes)</td></tr>
<tr>
<td>80 / 443</td><td>Optional: if using Nginx or HTTPS</td></tr>
</tbody>
</table>
</div>
</li>
<li><p>Launch the instance.</p>
</li>
</ol>
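<p>If you prefer the CLI, the same inbound rules can be added once the security group exists. The group ID below is a placeholder, and opening ports to <code>0.0.0.0/0</code> is convenient for learning but should be tightened for production:</p>

```bash
# Sketch: open SSH (22), Jenkins (8080), and the backend port (3000).
for port in 22 8080 3000; do
  aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port "$port" \
    --cidr 0.0.0.0/0
done
```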
<h3 id="heading-12-connect-to-ec2-via-ssh"><strong>1.2 Connect to EC2 via SSH</strong></h3>
<p>Use your terminal or PowerShell:</p>
<pre><code class="lang-bash">ssh -i your-key.pem ubuntu@&lt;EC2-PUBLIC-IP&gt;
</code></pre>
<p>Replace <code>&lt;EC2-PUBLIC-IP&gt;</code> with the public IPv4 address of your instance.</p>
<h3 id="heading-13-update-the-server"><strong>1.3 Update the Server</strong></h3>
<p>Once you're logged in, update system packages:</p>
<pre><code class="lang-bash">sudo apt update &amp;&amp; sudo apt upgrade -y
</code></pre>
<h3 id="heading-14-install-essential-tools"><strong>1.4 Install Essential Tools</strong></h3>
<p>Install basic tools needed later:</p>
<pre><code class="lang-bash">sudo apt install -y git curl unzip
</code></pre>
<h3 id="heading-15-understanding-what-this-ec2-will-do"><strong>1.5 Understanding What This EC2 Will Do</strong></h3>
<p>This single EC2 instance will be responsible for:</p>
<ul>
<li><p>Running <strong>Jenkins</strong> (CI/CD server)</p>
</li>
<li><p>Using Docker to <strong>build and push images</strong> to ECR</p>
</li>
<li><p>Deploying and running <strong>your backend application</strong> in a Docker container</p>
</li>
</ul>
<p>If you want to separate Jenkins from the deployment server, you can use two EC2 machines, but for learning and medium-sized projects, one is enough.</p>
<h2 id="heading-step-2-install-docker-on-ec2"><strong>Step 2: Install Docker on EC2</strong></h2>
<p>Docker is essential because both Jenkins and your EC2 server will use it to build, manage, and run your backend application inside containers.<br />Follow these steps to install Docker and prepare your EC2 instance for container-based deployments.</p>
<h3 id="heading-21-install-docker-engine"><strong>2.1 Install Docker Engine</strong></h3>
<p>Run the following commands to install Docker:</p>
<pre><code class="lang-bash">sudo apt update
sudo apt install -y ca-certificates curl gnupg lsb-release
</code></pre>
<p>Add Docker’s official GPG key:</p>
<pre><code class="lang-bash">sudo mkdir -m 0755 -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
</code></pre>
<p>Add Docker repository:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> \
  <span class="hljs-string">"deb [arch=<span class="hljs-subst">$(dpkg --print-architecture)</span> signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  <span class="hljs-subst">$(lsb_release -cs)</span> stable"</span> | sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null
</code></pre>
<p>Install Docker Engine:</p>
<pre><code class="lang-bash">sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io
</code></pre>
<h3 id="heading-22-verify-docker-installation"><strong>2.2 Verify Docker Installation</strong></h3>
<p>Run:</p>
<pre><code class="lang-bash">docker --version
</code></pre>
<p>And test Docker:</p>
<pre><code class="lang-bash">sudo docker run hello-world
</code></pre>
<p>If you see the "Hello from Docker!" message, installation is successful.</p>
<h3 id="heading-23-allow-non-root-users-to-use-docker"><strong>2.3 Allow Non-Root Users to Use Docker</strong></h3>
<p>Add the <strong>ubuntu</strong> user to the Docker group:</p>
<pre><code class="lang-bash">sudo usermod -aG docker ubuntu
</code></pre>
<p>Also add the <strong>jenkins</strong> user (after Jenkins installation):</p>
<pre><code class="lang-bash">sudo usermod -aG docker jenkins
</code></pre>
<p>Apply the changes:</p>
<pre><code class="lang-bash">newgrp docker
</code></pre>
<h3 id="heading-24-enable-docker-to-start-on-boot"><strong>2.4 Enable Docker to Start on Boot</strong></h3>
<pre><code class="lang-bash">sudo systemctl <span class="hljs-built_in">enable</span> docker
sudo systemctl start docker
</code></pre>
<h3 id="heading-25-confirm-docker-permissions"><strong>2.5 Confirm Docker Permissions</strong></h3>
<p>Verify that Docker can run without <code>sudo</code>:</p>
<pre><code class="lang-bash">docker ps
</code></pre>
<p>Now Docker is fully set up on your EC2 server and ready to be used by Jenkins for image building and deployment.</p>
<h2 id="heading-step-3-install-jenkins-on-ec2-using-docker"><strong>Step 3: Install Jenkins on EC2 Using Docker</strong></h2>
<p>Instead of installing Jenkins manually with packages, you’ll run Jenkins as a <strong>Docker container</strong>. This method is cleaner, easier to update, and keeps your EC2 instance lightweight.</p>
<p>We will use the <strong>official Jenkins LTS Docker image</strong>.</p>
<h3 id="heading-31-create-a-directory-for-jenkins-data"><strong>3.1 Create a Directory for Jenkins Data</strong></h3>
<p>To ensure Jenkins data (jobs, plugins, configs) is not lost when the container restarts, create a persistent volume:</p>
<pre><code class="lang-bash">mkdir -p ~/jenkins_home
sudo chown -R 1000:1000 ~/jenkins_home
</code></pre>
<p>The Jenkins container runs as user ID <code>1000</code>, so we give that user ownership of the mounted directory.</p>
<h3 id="heading-32-run-jenkins-docker-container"><strong>3.2 Run Jenkins Docker Container</strong></h3>
<p>Run the Jenkins container mapped to port <strong>8080</strong>:</p>
<pre><code class="lang-bash">docker run -d \
  --name jenkins \
  -p 8080:8080 \
  -p 50000:50000 \
  -v ~/jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts
</code></pre>
<p><strong>What each parameter means:</strong></p>
<ul>
<li><p><code>-p 8080:8080</code> → Jenkins UI</p>
</li>
<li><p><code>-p 50000:50000</code> → For Jenkins agent communication</p>
</li>
<li><p><code>-v ~/jenkins_home:/var/jenkins_home</code> → Persistent Jenkins storage</p>
</li>
<li><p><code>jenkins/jenkins:lts</code> → Stable Jenkins version</p>
</li>
</ul>
<h3 id="heading-33-install-docker-inside-jenkins-container-important"><strong>3.3 Give the Jenkins Container Access to Docker (Important)</strong></h3>
<p>Since Jenkins will build Docker images, the container must have access to Docker. Rather than running a separate Docker engine inside the container, we mount the host's Docker socket into it.</p>
<p>Give the Jenkins container access to the host's Docker daemon:</p>
<ol>
<li><p>Stop Jenkins container:</p>
<pre><code class="lang-bash"> docker stop jenkins
</code></pre>
</li>
<li><p>Re-run Jenkins with Docker socket mounted:</p>
<pre><code class="lang-bash"> docker run -d \
   --name jenkins \
   -p 8080:8080 \
   -p 50000:50000 \
   -v ~/jenkins_home:/var/jenkins_home \
   -v /var/run/docker.sock:/var/run/docker.sock \
   jenkins/jenkins:lts
</code></pre>
</li>
</ol>
<p>Now Jenkins can run docker commands directly using the host’s Docker engine.</p>
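<p>One caveat: mounting the socket exposes the host daemon to the container, but the official <code>jenkins/jenkins:lts</code> image does not ship with the Docker CLI itself. If <code>docker</code> is not found inside the container, one quick (if heavyweight) fix is to install the client package into the running container; building a custom Jenkins image with the CLI baked in is the cleaner option for production:</p>

```bash
# Install the Docker client inside the running Jenkins container (as root).
# Note: the docker.io package pulls in more than just the CLI, but it is
# the shortest path for a learning setup.
docker exec -u root -it jenkins bash -c \
  "apt-get update && apt-get install -y docker.io"
```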
<h3 id="heading-34-access-jenkins-ui"><strong>3.4 Access Jenkins UI</strong></h3>
<p>Open in your browser:</p>
<pre><code class="lang-bash">http://EC2_PUBLIC_IP:8080
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765105295624/e2b714bd-7d34-498c-8ce1-4e8d7740fd53.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-35-retrieve-jenkins-initial-password"><strong>3.5 Retrieve Jenkins Initial Password</strong></h3>
<p>Inside the container, run:</p>
<pre><code class="lang-bash">docker <span class="hljs-built_in">exec</span> -it jenkins cat /var/jenkins_home/secrets/initialAdminPassword
</code></pre>
<p>Copy the password and paste it into the Jenkins setup screen.</p>
<h3 id="heading-36-install-recommended-plugins"><strong>3.6 Install Recommended Plugins</strong></h3>
<p>After login:</p>
<ol>
<li><p>Select <strong>Install suggested plugins</strong></p>
</li>
<li><p>Wait for Jenkins to install everything</p>
</li>
<li><p>Create your <strong>admin user</strong></p>
</li>
<li><p>Finish setup</p>
</li>
</ol>
<h3 id="heading-37-add-jenkins-user-to-docker-group-already-covered"><strong>3.7 Add Jenkins User to Docker Group (Already Covered)</strong></h3>
<p>Although Jenkins now uses the host Docker socket, ensure the permissions are correct. (The command below only applies when a <code>jenkins</code> user exists on the host, i.e. when Jenkins is installed directly on the EC2 instance rather than in a container; otherwise it will report an error and can be skipped.)</p>
<pre><code class="lang-bash">sudo usermod -aG docker jenkins
</code></pre>
<p>Restart Jenkins container:</p>
<pre><code class="lang-bash">docker restart jenkins
</code></pre>
<h3 id="heading-38-verify-docker-from-jenkins"><strong>3.8 Verify Docker From Jenkins</strong></h3>
<p>Inside the Jenkins container:</p>
<pre><code class="lang-bash">docker <span class="hljs-built_in">exec</span> -it jenkins docker --version
</code></pre>
<p>If it prints a version, Jenkins can run Docker.</p>
<p>Jenkins is now fully configured inside Docker and ready to run CI/CD pipelines.</p>
<h2 id="heading-step-4-install-required-plugins-in-jenkins"><strong>Step 4: Install Required Plugins in Jenkins</strong></h2>
<p>To build a complete CI/CD pipeline that deploys your backend using Docker and Amazon ECR, Jenkins requires a few essential plugins. These plugins provide Git integration, Docker commands, ECR authentication, and SSH deployment capabilities.</p>
<h3 id="heading-41-access-plugin-manager"><strong>4.1 Access Plugin Manager</strong></h3>
<ol>
<li><p>Go to <strong>Jenkins Dashboard → Manage Jenkins → Manage Plugins</strong></p>
</li>
<li><p>Switch to the <strong>Available</strong> tab</p>
</li>
<li><p>Search and install the following plugins (you can select multiple):</p>
</li>
</ol>
<h3 id="heading-42-essential-plugins"><strong>4.2 Essential Plugins</strong></h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Plugin</td><td>Purpose</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Git Plugin</strong></td><td>Pull code from GitHub or other Git repositories</td></tr>
<tr>
<td><strong>Pipeline</strong></td><td>Enable pipeline jobs using Jenkinsfile</td></tr>
<tr>
<td><strong>Docker Pipeline</strong></td><td>Build and push Docker images inside Jenkins pipeline</td></tr>
<tr>
<td><strong>Amazon ECR</strong></td><td>Authenticate and push Docker images to Amazon ECR</td></tr>
<tr>
<td><strong>SSH Agent</strong></td><td>Allows Jenkins to SSH into EC2 for deployments</td></tr>
<tr>
<td><strong>Credentials Binding Plugin</strong></td><td>Safely store and use credentials in pipelines</td></tr>
</tbody>
</table>
</div><h3 id="heading-43-restart-jenkins-optional"><strong>4.3 Restart Jenkins (Optional)</strong></h3>
<p>Some plugins may require a restart. Jenkins will prompt you if necessary. You can also restart using Docker:</p>
<pre><code class="lang-bash">docker restart jenkins
</code></pre>
<h3 id="heading-44-configure-credentials-in-jenkins"><strong>4.4 Configure Credentials in Jenkins</strong></h3>
<p>You need to securely store the following credentials:</p>
<ol>
<li><p><strong>AWS Access Key &amp; Secret Key</strong></p>
<ul>
<li><p>Go to <strong>Manage Jenkins → Credentials → System → Global credentials → Add Credentials</strong></p>
</li>
<li><p>Kind: <strong>Username with password</strong> or <strong>AWS Credential</strong> (depends on plugin)</p>
</li>
<li><p>ID: <code>aws-ecr-credentials</code> (used in Jenkinsfile)</p>
</li>
</ul>
</li>
<li><p><strong>EC2 SSH Key</strong></p>
<ul>
<li><p>Add your <code>.pem</code> key as <strong>SSH Username with private key</strong></p>
</li>
<li><p>ID: <code>ec2-key</code> (used for deployment stage)</p>
</li>
</ul>
</li>
</ol>
<p>Once these plugins and credentials are set, Jenkins is fully ready to run <strong>pipeline jobs</strong> that build Docker images, push them to Amazon ECR, and deploy them on EC2.</p>
<h2 id="heading-step-5-create-amazon-ecr-repository"><strong>Step 5: Create Amazon ECR Repository</strong></h2>
<p>Amazon Elastic Container Registry (ECR) is a fully managed Docker container registry that allows you to store, manage, and deploy Docker images. In this step, we’ll create a repository for your backend Docker images.</p>
<h3 id="heading-51-create-a-new-ecr-repository"><strong>5.1 Create a New ECR Repository</strong></h3>
<ol>
<li><p>Log in to the <strong>AWS Management Console</strong>.</p>
</li>
<li><p>Go to <strong>Services → ECR → Repositories → Create Repository</strong>.</p>
</li>
<li><p>Configure the repository:</p>
<ul>
<li><p><strong>Name</strong>: <code>backend-service</code> (or any name you prefer)</p>
</li>
<li><p><strong>Visibility</strong>: Private (recommended for production)</p>
</li>
<li><p><strong>Tags</strong>: Optional</p>
</li>
</ul>
</li>
<li><p>Click <strong>Create Repository</strong>.</p>
</li>
</ol>
<h3 id="heading-52-note-the-repository-uri"><strong>5.2 Note the Repository URI</strong></h3>
<p>After creation, you will see the <strong>Repository URI</strong>. It looks like:</p>
<pre><code class="lang-bash">&lt;aws_account_id&gt;.dkr.ecr.&lt;region&gt;.amazonaws.com/backend-service
</code></pre>
<p>You’ll need this URI in your Jenkins pipeline to tag and push Docker images.</p>
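<p>With the URI in hand, Docker can authenticate to the registry using a short-lived token from the AWS CLI. Replace <code>123456789012</code> and <code>us-east-1</code> with your own account ID and region:</p>

```bash
# Sketch: log Docker in to your private ECR registry.
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS \
  --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
```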
<h3 id="heading-53-configure-aws-iam-user-or-role"><strong>5.3 Configure AWS IAM User or Role</strong></h3>
<p>Jenkins needs AWS credentials to authenticate and push Docker images to ECR.</p>
<ol>
<li><p>Go to <strong>IAM → Users → Add User</strong></p>
</li>
<li><p>Access type: <strong>Programmatic access</strong> (for AWS CLI)</p>
</li>
<li><p>Attach policies:</p>
<ul>
<li><p><code>AmazonEC2ContainerRegistryFullAccess</code></p>
</li>
<li><p><code>AmazonEC2FullAccess</code> (optional if Jenkins deploys directly to EC2)</p>
</li>
</ul>
</li>
<li><p>Copy <strong>Access Key ID</strong> and <strong>Secret Access Key</strong></p>
</li>
</ol>
<h3 id="heading-54-add-aws-credentials-to-jenkins"><strong>5.4 Add AWS Credentials to Jenkins</strong></h3>
<ol>
<li><p>Go to <strong>Jenkins Dashboard → Manage Jenkins → Credentials → System → Global credentials → Add Credentials</strong></p>
</li>
<li><p>Select <strong>Kind: AWS Credentials</strong> (or Username/Password)</p>
</li>
<li><p>Enter Access Key and Secret Key</p>
</li>
<li><p>Set <strong>ID</strong> as <code>aws-ecr-credentials</code> (you’ll use this in the Jenkinsfile)</p>
</li>
</ol>
<p>Once the ECR repository and credentials are ready, Jenkins can push Docker images from your CI/CD pipeline to Amazon ECR.</p>
<h3 id="heading-55-add-github-credentials-in-jenkins"><strong>5.5 Add GitHub Credentials in Jenkins</strong></h3>
<ol>
<li><p>Go to <strong>Jenkins Dashboard → Manage Jenkins → Credentials → System → Global credentials → Add Credentials</strong></p>
</li>
<li><p>Choose <strong>Kind:</strong> <code>Username with password</code> (or <strong>Personal Access Token</strong> if using HTTPS)</p>
</li>
<li><p>Enter:</p>
<ul>
<li><p><strong>Username:</strong> Your GitHub username</p>
</li>
<li><p><strong>Password:</strong> Your GitHub password or Personal Access Token (PAT)</p>
</li>
</ul>
</li>
<li><p>Set <strong>ID:</strong> <code>github-credentials</code></p>
</li>
</ol>
<blockquote>
<p>If your repository is private, Jenkins will need these credentials to clone the repo.</p>
</blockquote>
<h3 id="heading-56-ec2-ssh-key-ec2-key"><strong>5.6 EC2 SSH Key (</strong><code>ec2-key</code>)</h3>
<ol>
<li><p>When you launch your EC2 instance, you create a <strong>.pem key pair</strong>. This key allows secure SSH access to the instance.</p>
</li>
<li><p>To let Jenkins deploy the backend automatically, you need to <strong>add this key in Jenkins</strong>:</p>
</li>
</ol>
<p><strong>Steps:</strong></p>
<ol>
<li><p>Go to <strong>Jenkins Dashboard → Manage Jenkins → Credentials → System → Global credentials → Add Credentials</strong></p>
</li>
<li><p>Choose <strong>Kind:</strong> <code>SSH Username with private key</code></p>
</li>
<li><p>Fill in the details:</p>
<ul>
<li><p><strong>Username:</strong> <code>ubuntu</code> (default for Ubuntu EC2)</p>
</li>
<li><p><strong>Private Key:</strong> Enter directly or upload your <code>.pem</code> file</p>
</li>
<li><p><strong>ID:</strong> <code>ec2-key</code> (used in Jenkinsfile)</p>
</li>
</ul>
</li>
<li><p>Click <strong>OK</strong> to save.</p>
</li>
</ol>
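<p>Before pasting the key into Jenkins (or using it from a terminal), make sure the <code>.pem</code> file has strict permissions, because SSH refuses world-readable private keys. A small sketch (the key path is hypothetical; the <code>touch</code> stands in for your real key file):</p>
<pre><code class="lang-bash">KEY=./my-ec2-key.pem        # hypothetical path to your downloaded key pair
touch "$KEY"                # stand-in for the real key file in this sketch
chmod 400 "$KEY"            # owner read-only; SSH rejects looser permissions
stat -c '%a' "$KEY"         # prints 400
# ssh -i "$KEY" ubuntu@YOUR_EC2_PUBLIC_IP   # then connect like this
</code></pre>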
<h2 id="heading-step-6-write-your-dockerfile-amp-docker-compose-for-backend"><strong>Step 6: Write Your Dockerfile &amp; Docker Compose for Backend</strong></h2>
<blockquote>
<p><strong>Note:</strong> You can <strong>clone this repository</strong> for reference and follow along with the examples in this blog. It contains the full backend project, Dockerfile, and Docker Compose setup.</p>
<p>link :- <a target="_blank" href="https://github.com/sushant4612/backend-server">https://github.com/sushant4612/backend-server</a></p>
</blockquote>
<p>In this step, we will containerize a <strong>Node.js backend</strong> that uses <strong>MongoDB</strong>. We’ll use a <strong>Dockerfile</strong> for the backend service and <strong>Docker Compose</strong> to orchestrate both backend and database containers together.</p>
<h3 id="heading-61-dockerfile-for-the-backend"><strong>6.1 Dockerfile for the Backend</strong></h3>
<pre><code class="lang-bash">FROM node:18-alpine

WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install

COPY . .

EXPOSE 5000

CMD [<span class="hljs-string">"npm"</span>, <span class="hljs-string">"start"</span>]
</code></pre>
<p><strong>Explanation:</strong></p>
<ul>
<li><p><strong>Base Image:</strong> <code>node:18-alpine</code> provides a lightweight Node.js environment.</p>
</li>
<li><p><strong>WORKDIR:</strong> Sets the working directory inside the container.</p>
</li>
<li><p><strong>COPY + RUN:</strong> Installs dependencies using <code>npm install</code>.</p>
</li>
<li><p><strong>EXPOSE 5000:</strong> Exposes port 5000 for the backend.</p>
</li>
<li><p><strong>CMD:</strong> Starts the backend using <code>npm start</code>.</p>
</li>
</ul>
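<p>Alongside the Dockerfile, it is worth adding a <code>.dockerignore</code> file so that <code>COPY . .</code> does not drag local artifacts into the image. A minimal sketch:</p>
<pre><code class="lang-bash"># .dockerignore
node_modules
npm-debug.log
.git
.env
</code></pre>
<p>Excluding <code>node_modules</code> matters most: the image should use the modules installed by <code>npm install</code> inside the container, not the ones from your host machine.</p>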
<h3 id="heading-62-docker-compose-for-multi-container-setup"><strong>6.2 Docker Compose for Multi-Container Setup</strong></h3>
<pre><code class="lang-bash">version: <span class="hljs-string">'3.8'</span>

services:
  app:
    build: .
    ports:
      - <span class="hljs-string">"5000:5000"</span>
    environment:
      - MONGODB_URI=mongodb://mongo:27017/express-mongo-app
      - PORT=5000
    depends_on:
      - mongo
    restart: unless-stopped
    networks:
      - app-network

  mongo:
    image: mongo:6.0
    ports:
      - <span class="hljs-string">"27017:27017"</span>
    volumes:
      - mongodb_data:/data/db
    networks:
      - app-network

networks:
  app-network:
    driver: bridge

volumes:
  mongodb_data:
</code></pre>
<p><strong>Explanation:</strong></p>
<ul>
<li><p><strong>app service:</strong></p>
<ul>
<li><p>Builds the backend from the Dockerfile.</p>
</li>
<li><p>Connects to MongoDB using <code>MONGODB_URI</code>.</p>
</li>
<li><p>Automatically restarts unless manually stopped.</p>
</li>
</ul>
</li>
<li><p><strong>mongo service:</strong></p>
<ul>
<li><p>Uses official MongoDB image.</p>
</li>
<li><p>Persists data using a Docker volume (<code>mongodb_data</code>).</p>
</li>
</ul>
</li>
<li><p><strong>Networks:</strong> Both services are connected to <code>app-network</code> so the backend can reach MongoDB by hostname <code>mongo</code>.</p>
</li>
<li><p><strong>Volumes:</strong> Ensures MongoDB data is not lost when containers restart.</p>
</li>
</ul>
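<p>Note that <code>depends_on</code> only waits for the Mongo <em>container</em> to start, not for the database to accept connections. With a recent Docker Compose you can add a healthcheck and gate the backend on it; a sketch of the extra lines to merge into the file above (<code>mongosh</code> ships in the <code>mongo:6.0</code> image):</p>
<pre><code class="lang-bash">  mongo:
    healthcheck:
      test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5

  app:
    depends_on:
      mongo:
        condition: service_healthy
</code></pre>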
<h3 id="heading-63-run-the-multi-container-setup"><strong>6.3 Run the Multi-Container Setup</strong></h3>
<p>Run the following command locally to test:</p>
<pre><code class="lang-bash">docker-compose up --build
</code></pre>
<ul>
<li><p>Backend will be available at <code>http://localhost:5000</code></p>
</li>
<li><p>MongoDB will be accessible internally at <code>mongo:27017</code></p>
</li>
</ul>
<p>This setup ensures your backend and database run together in isolated containers. Later, the <strong>Jenkins pipeline</strong> will automate building and deploying this setup to EC2.</p>
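<p>The backend can take a few seconds to come up after <code>docker-compose up</code>. You can smoke-test it with a small retry helper like this (a generic sketch; the commented line shows how you would point it at the local endpoint):</p>
<pre><code class="lang-bash"># Retry a command up to N times, one second apart, until it succeeds
wait_for() {
  local tries=$1
  shift
  for i in $(seq 1 "$tries"); do
    if "$@"; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# wait_for 30 curl -sf http://localhost:5000   # real usage
if wait_for 3 true; then echo "service is up"; fi
</code></pre>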
<h2 id="heading-step-7-create-a-jenkins-pipeline-job-and-jenkinsfile"><strong>Step 7: Create a Jenkins Pipeline Job and Jenkinsfile</strong></h2>
<p>Now that your Dockerfile and Docker Compose setup are ready, we will create a <strong>Jenkins Pipeline</strong> that automates the process of building, pushing, and deploying your backend application.</p>
<h3 id="heading-71-create-a-new-pipeline-job-in-jenkins"><strong>7.1 Create a New Pipeline Job in Jenkins</strong></h3>
<ol>
<li><p>Open your Jenkins dashboard:</p>
<pre><code class="lang-bash"> http://&lt;EC2_PUBLIC_IP&gt;:8080
</code></pre>
</li>
<li><p>Click <strong>New Item</strong> → Enter <strong>Job Name</strong> (e.g., <code>Backend-CI-CD</code>) → Select <strong>Pipeline</strong> → Click <strong>OK</strong></p>
</li>
<li><p>Scroll down to <strong>Pipeline</strong> section → Choose <strong>Pipeline script from SCM</strong></p>
<ul>
<li><p><strong>SCM:</strong> Git</p>
</li>
<li><p><strong>Repository URL:</strong> <a target="_blank" href="https://github.com/your-username/your-repo.git"><code>https://github.com/your-username/your-repo.git</code></a></p>
</li>
<li><p><strong>Branch:</strong> <code>main</code></p>
</li>
<li><p><strong>Script Path:</strong> <code>Jenkinsfile</code></p>
</li>
</ul>
</li>
</ol>
<blockquote>
<p>Jenkins will now fetch the Jenkinsfile from your repo and execute it for each build.</p>
</blockquote>
<h3 id="heading-72-example-jenkinsfile"><strong>7.2 Example Jenkinsfile</strong></h3>
<p>This Jenkinsfile will:</p>
<ol>
<li><p>Checkout code from GitHub</p>
</li>
<li><p>Login to Amazon ECR</p>
</li>
<li><p>Build Docker image</p>
</li>
<li><p>Push image to ECR</p>
</li>
<li><p>Deploy backend on EC2</p>
</li>
</ol>
<pre><code class="lang-bash">pipeline {
    agent any

    environment {
        AWS_REGION = <span class="hljs-string">"ap-south-1"</span>
        ECR_REPO = <span class="hljs-string">"your_account_id.dkr.ecr.ap-south-1.amazonaws.com/backend-service"</span>
    }

    stages {

        stage(<span class="hljs-string">'Checkout'</span>) {
            steps {
                git branch: <span class="hljs-string">'main'</span>,
                    url: <span class="hljs-string">'https://github.com/your-username/your-repo.git'</span>,
                    credentialsId: <span class="hljs-string">'github-credentials'</span>
            }
        }

        stage(<span class="hljs-string">'Login to ECR'</span>) {
            steps {
                withCredentials([[<span class="hljs-variable">$class</span>: <span class="hljs-string">'AmazonWebServicesCredentialsBinding'</span>, credentialsId: <span class="hljs-string">'aws-ecr-credentials'</span>]]) {
                    sh <span class="hljs-string">"aws ecr get-login-password --region <span class="hljs-variable">$AWS_REGION</span> | docker login --username AWS --password-stdin <span class="hljs-variable">$ECR_REPO</span>"</span>
                }
            }
        }

        stage(<span class="hljs-string">'Build Docker Image'</span>) {
            steps {
                sh <span class="hljs-string">"docker build -t backend-service ."</span>
            }
        }

        stage(<span class="hljs-string">'Tag Docker Image'</span>) {
            steps {
                sh <span class="hljs-string">"docker tag backend-service:latest <span class="hljs-variable">$ECR_REPO</span>:latest"</span>
            }
        }

        stage(<span class="hljs-string">'Push to ECR'</span>) {
            steps {
                sh <span class="hljs-string">"docker push <span class="hljs-variable">$ECR_REPO</span>:latest"</span>
            }
        }

        stage(<span class="hljs-string">'Deploy to EC2'</span>) {
            steps {
                sshagent(credentials: [<span class="hljs-string">'ec2-key'</span>]) {
                    sh <span class="hljs-string">""</span><span class="hljs-string">"
                    ssh -o StrictHostKeyChecking=no ubuntu@&lt;EC2_PUBLIC_IP&gt; '
                        docker pull <span class="hljs-variable">$ECR_REPO</span>:latest &amp;&amp;
                        docker stop backend || true &amp;&amp;
                        docker rm backend || true &amp;&amp;
                        docker run -d -p 5000:5000 --name backend <span class="hljs-variable">$ECR_REPO</span>:latest
                    '
                    "</span><span class="hljs-string">""</span>
                }
            }
        }
    }
}
</code></pre>
<h3 id="heading-73-key-points-of-this-pipeline"><strong>7.3 Key Points of This Pipeline</strong></h3>
<ul>
<li><p><strong>Checkout Code:</strong> Pulls the latest code from GitHub.</p>
</li>
<li><p><strong>Login to ECR:</strong> Uses AWS CLI to authenticate Docker with Amazon ECR.</p>
</li>
<li><p><strong>Build &amp; Tag Docker Image:</strong> Builds backend container and tags it with the ECR repository URI.</p>
</li>
<li><p><strong>Push to ECR:</strong> Pushes the Docker image to your private ECR repository.</p>
</li>
<li><p><strong>Deploy to EC2:</strong> SSH into the EC2 server, stop any running container, remove it, pull the new image, and start it.</p>
</li>
</ul>
<h3 id="heading-74-configure-jenkins-credentials"><strong>7.4 Configure Jenkins Credentials</strong></h3>
<p>Make sure you have added the following credentials in Jenkins:</p>
<ol>
<li><p><strong>AWS Credentials</strong> → ID: <code>aws-ecr-credentials</code></p>
</li>
<li><p><strong>EC2 SSH Key</strong> → ID: <code>ec2-key</code></p>
</li>
</ol>
<p>These IDs are referenced in the Jenkinsfile.</p>
<h3 id="heading-75-trigger-the-pipeline"><strong>7.5 Trigger the Pipeline</strong></h3>
<ol>
<li><p>Save the pipeline job.</p>
</li>
<li><p>Click <strong>Build Now</strong> → Jenkins will start the CI/CD pipeline.</p>
</li>
<li><p>Check console output for logs of each stage.</p>
</li>
<li><p>Once complete, your backend will be running on EC2 at:</p>
</li>
</ol>
<pre><code class="lang-bash">http://&lt;EC2_PUBLIC_IP&gt;:5000
</code></pre>
<p>This completes the automation setup for your backend deployment using <strong>Jenkins, Docker, Amazon ECR, and EC2</strong>.</p>
<h2 id="heading-step-8-verify-deployment-on-ec2"><strong>Step 8: Verify Deployment on EC2</strong></h2>
<p>Once the Jenkins pipeline successfully completes all stages (build → push → deploy), the next step is to verify whether your backend is running correctly on your EC2 instance.</p>
<h3 id="heading-81-check-running-containers-on-ec2"><strong>8.1 Check Running Containers on EC2</strong></h3>
<p>SSH into your EC2 instance manually:</p>
<pre><code class="lang-bash">ssh -i your-key.pem ubuntu@&lt;EC2_PUBLIC_IP&gt;
</code></pre>
<p>Then check the running containers:</p>
<pre><code class="lang-bash">docker ps
</code></pre>
<p>You should see the container named <code>backend</code> running with port mapping:</p>
<pre><code class="lang-bash">0.0.0.0:5000 -&gt; 5000/tcp
</code></pre>
<h3 id="heading-82-test-the-api-endpoint"><strong>8.2 Test the API Endpoint</strong></h3>
<p>Open your browser or run curl:</p>
<pre><code class="lang-bash">curl http://&lt;EC2_PUBLIC_IP&gt;:5000
</code></pre>
<p>If your backend has a <code>/health</code> or <code>/</code> route, you should get a response like:</p>
<pre><code class="lang-bash">{ <span class="hljs-string">"status"</span>: <span class="hljs-string">"ok"</span>, <span class="hljs-string">"message"</span>: <span class="hljs-string">"Server is running"</span> }
</code></pre>
<p>This confirms the deployment is successful.</p>
<h3 id="heading-83-check-application-logs"><strong>8.3 Check Application Logs</strong></h3>
<p>If something goes wrong, check logs:</p>
<pre><code class="lang-bash">docker logs backend
</code></pre>
<p>Look for:</p>
<ul>
<li><p>Database connection errors</p>
</li>
<li><p>Missing environment variables</p>
</li>
<li><p>Port binding issues</p>
</li>
<li><p>Crash loops</p>
</li>
</ul>
<h3 id="heading-84-check-mongodb-container-if-running-on-ec2"><strong>8.4 Check MongoDB Container (If Running on EC2)</strong></h3>
<p>If your EC2 also hosts Mongo:</p>
<pre><code class="lang-bash">docker ps | grep mongo
</code></pre>
<p>If it's running through Docker Compose on EC2, make sure both containers are attached to the same network.</p>
<h3 id="heading-85-debug-common-network-issues"><strong>8.5 Debug Common Network Issues</strong></h3>
<p>If <code>curl</code> does not return a response:</p>
<ul>
<li><p>Check EC2 Security Group and ensure port 5000 is open</p>
</li>
<li><p>Check Docker container port exposure</p>
</li>
<li><p>Check if the container is restarting:</p>
</li>
</ul>
<pre><code class="lang-bash">docker ps -a
</code></pre>
<ul>
<li>Inspect container logs:</li>
</ul>
<pre><code class="lang-bash">docker logs backend
</code></pre>
<h2 id="heading-step-9-common-issues-and-how-to-fix-them"><strong>Step 9: Common Issues and How to Fix Them</strong></h2>
<p>Even with a well-configured CI/CD pipeline, you may face issues during deployment. This section covers the most common problems and how to resolve them efficiently.</p>
<h3 id="heading-91-jenkins-cannot-clone-github-repository"><strong>9.1 Jenkins Cannot Clone GitHub Repository</strong></h3>
<p><strong>Error:</strong><br /><code>Authentication failed</code> or <code>Repository not found</code></p>
<p><strong>Causes:</strong></p>
<ul>
<li><p>GitHub repository is private</p>
</li>
<li><p>Missing or wrong GitHub credentials</p>
</li>
<li><p>Incorrect credentialsId in Jenkinsfile</p>
</li>
</ul>
<p><strong>Fix:</strong></p>
<ol>
<li><p>Add GitHub credentials in Jenkins (Personal Access Token recommended).</p>
</li>
<li><p>Set the credential ID in the Jenkinsfile:</p>
</li>
</ol>
<pre><code class="lang-bash">credentialsId: <span class="hljs-string">'github-credentials'</span>
</code></pre>
<h3 id="heading-92-jenkins-fails-to-login-to-ecr"><strong>9.2 Jenkins Fails to Login to ECR</strong></h3>
<p><strong>Error:</strong><br /><code>Cannot connect to the Docker daemon</code><br />or<br /><code>unauthorized: authentication required</code></p>
<p><strong>Causes:</strong></p>
<ul>
<li><p>Jenkins is missing AWS credentials</p>
</li>
<li><p>ECR repository does not exist</p>
</li>
<li><p>Docker is not running inside Jenkins container</p>
</li>
</ul>
<p><strong>Fix:</strong></p>
<ol>
<li><p>Create ECR repo manually:</p>
<pre><code class="lang-bash"> aws ecr create-repository --repository-name backend-service
</code></pre>
</li>
<li><p>Add AWS credentials in Jenkins with ID: <code>aws-ecr-credentials</code></p>
</li>
<li><p>Ensure Docker is installed and running inside Jenkins container.</p>
</li>
</ol>
<h3 id="heading-93-docker-build-fails-in-jenkins"><strong>9.3 Docker Build Fails in Jenkins</strong></h3>
<p><strong>Common reasons:</strong></p>
<ul>
<li><p>Wrong Dockerfile path</p>
</li>
<li><p>Missing environment variables</p>
</li>
<li><p>Broken application code</p>
</li>
<li><p>Missing <code>package.json</code> or corrupted <code>node_modules</code></p>
</li>
</ul>
<p><strong>Fix:</strong></p>
<ul>
<li><p>Verify Dockerfile is inside project root.</p>
</li>
<li><p>Run this locally to confirm Dockerfile builds:</p>
<pre><code class="lang-bash">  docker build -t test-backend .
</code></pre>
</li>
</ul>
<h3 id="heading-94-jenkins-cannot-ssh-into-ec2"><strong>9.4 Jenkins Cannot SSH Into EC2</strong></h3>
<p><strong>Error:</strong><br /><code>Permission denied (publickey)</code><br />or<br /><code>Host key verification failed</code></p>
<p><strong>Causes:</strong></p>
<ul>
<li><p>Wrong username (use <code>ubuntu</code> for Ubuntu AMIs)</p>
</li>
<li><p>Incorrect SSH key added in Jenkins</p>
</li>
<li><p>Missing SSH agent plugin</p>
</li>
<li><p>Public IP changed for EC2</p>
</li>
</ul>
<p><strong>Fix:</strong></p>
<ol>
<li><p>Add EC2 SSH key (<code>ec2-key</code>) as type: <strong>SSH Username with Private Key</strong></p>
</li>
<li><p>Username should be correct:</p>
<ul>
<li>Ubuntu AMI: <code>ubuntu</code></li>
</ul>
</li>
<li><p>Disable strict host checking in command:</p>
<pre><code class="lang-bash"> ssh -o StrictHostKeyChecking=no ubuntu@&lt;EC2_PUBLIC_IP&gt;
</code></pre>
</li>
</ol>
<h3 id="heading-95-ec2-deployment-issues-after-pulling-image"><strong>9.5 EC2 Deployment Issues After Pulling Image</strong></h3>
<p><strong>Error when running container:</strong><br /><code>Address already in use</code><br />or backend stops immediately.</p>
<p><strong>Causes:</strong></p>
<ul>
<li><p>A container is already running on port 5000</p>
</li>
<li><p>Environment variables missing</p>
</li>
<li><p>Crash due to MongoDB URI issues</p>
</li>
</ul>
<p><strong>Fix:</strong></p>
<pre><code class="lang-bash">docker stop backend || <span class="hljs-literal">true</span>
docker rm backend || <span class="hljs-literal">true</span>
docker run -d -p 5000:5000 --name backend &lt;image-url&gt;
</code></pre>
<p>Check logs:</p>
<pre><code class="lang-bash">docker logs backend
</code></pre>
<h3 id="heading-96-application-cannot-connect-to-mongodb"><strong>9.6 Application Cannot Connect to MongoDB</strong></h3>
<p><strong>Common causes:</strong></p>
<ul>
<li><p>Wrong MongoDB hostname</p>
</li>
<li><p>MongoDB not running</p>
</li>
<li><p>MongoDB is inside Docker but backend is not using the correct network</p>
</li>
<li><p>If only the backend container is deployed on EC2 (without Compose), the hostname <code>mongo</code> does not resolve, so a Compose-style URI is wrong</p>
</li>
</ul>
<p><strong>Fix:</strong></p>
<ul>
<li><p>If using Docker Compose locally, use:</p>
<pre><code class="lang-bash">  mongodb://mongo:27017/express-mongo-app
</code></pre>
</li>
<li><p>If using external MongoDB (Atlas or EC2), use the correct connection string.</p>
</li>
<li><p>Confirm Mongo is running:</p>
</li>
</ul>
<pre><code class="lang-bash">docker ps | grep mongo
</code></pre>
<h3 id="heading-97-pipeline-works-but-ec2-shows-old-code"><strong>9.7 Pipeline Works But EC2 Shows Old Code</strong></h3>
<p><strong>Cause:</strong> Docker container on EC2 is not updated correctly.</p>
<p><strong>Fix:</strong><br />Ensure pipeline uses these steps:</p>
<pre><code class="lang-bash">docker pull &lt;repo&gt;:latest
docker stop backend || <span class="hljs-literal">true</span>
docker rm backend || <span class="hljs-literal">true</span>
docker run -d -p 5000:5000 --name backend &lt;repo&gt;:latest
</code></pre>
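<p>A common hardening step for this problem is to tag images with the Jenkins build number instead of relying only on <code>:latest</code>, so every deployment pulls a distinct image. A sketch (inside a pipeline Jenkins sets <code>BUILD_NUMBER</code> automatically; here it is hard-coded for illustration):</p>
<pre><code class="lang-bash">BUILD_NUMBER=42                 # provided by Jenkins in a real pipeline
IMAGE_TAG="v${BUILD_NUMBER}"
echo "$IMAGE_TAG"               # prints v42

# docker tag backend-service "$ECR_REPO:$IMAGE_TAG"
# docker push "$ECR_REPO:$IMAGE_TAG"
# docker run -d -p 5000:5000 --name backend "$ECR_REPO:$IMAGE_TAG"
</code></pre>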
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>In this guide, you learned how to build a complete CI/CD pipeline for deploying a backend application using Jenkins, Docker, AWS ECR, and an EC2 instance. By the end of the setup, your workflow is fully automated:</p>
<ol>
<li><p>Push code to GitHub</p>
</li>
<li><p>Jenkins pulls the latest code</p>
</li>
<li><p>Jenkins builds a Docker image</p>
</li>
<li><p>The image is pushed to Amazon ECR</p>
</li>
<li><p>The latest container is deployed automatically on EC2</p>
</li>
</ol>
<p>This approach ensures faster deployments, consistent environments, and minimal manual work. You can extend this pipeline further by adding:</p>
<ul>
<li><p>Automated tests</p>
</li>
<li><p>Blue-green or rolling deployments</p>
</li>
<li><p>Monitoring and alerting</p>
</li>
<li><p>HTTPS with Nginx or AWS Load Balancer</p>
</li>
<li><p>Multi-environment pipelines (dev, stage, prod)</p>
</li>
</ul>
<p>By adopting Jenkins with Docker and AWS, you make your backend deployment process more reliable, scalable, and ready for real-world production use.</p>
]]></content:encoded></item><item><title><![CDATA[Getting Started With Jenkins]]></title><description><![CDATA[What is Jenkins?
Simple Definition
Jenkins is an open-source automation server that helps automate the process of building, testing, and deploying applications.
In simple words, it is a tool that runs tasks automatically so developers don’t have to d...]]></description><link>https://blog.sushant.dev/getting-started-with-jenkins</link><guid isPermaLink="true">https://blog.sushant.dev/getting-started-with-jenkins</guid><category><![CDATA[Jenkins]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Sushant Pathare]]></dc:creator><pubDate>Sat, 06 Dec 2025 09:47:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765014398306/47e3db26-a974-4fb5-bbed-6589f65601d2.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-what-is-jenkins"><strong>What is Jenkins?</strong></h2>
<h3 id="heading-simple-definition"><strong>Simple Definition</strong></h3>
<p><strong>Jenkins is an open-source automation server that helps automate the process of building, testing, and deploying applications.</strong></p>
<p>In simple words, it is a tool that runs tasks automatically so developers don’t have to do them manually again and again.</p>
<h3 id="heading-why-it-is-used"><strong>Why It Is Used</strong></h3>
<p>Jenkins is mainly used to enable <strong>CI/CD</strong> (Continuous Integration and Continuous Delivery).</p>
<p>This means:</p>
<ul>
<li><p>Every time developers push code, Jenkins automatically checks it.</p>
</li>
<li><p>It builds the project, runs tests, and reports errors quickly.</p>
</li>
<li><p>It can also deploy the project automatically to servers.</p>
</li>
</ul>
<h3 id="heading-reasons-jenkins-is-used"><strong>Reasons Jenkins is used:</strong></h3>
<ul>
<li><p>To save time by automating repetitive tasks</p>
</li>
<li><p>To detect bugs early</p>
</li>
<li><p>To make deployments faster and safer</p>
</li>
<li><p>To improve team productivity</p>
</li>
<li><p>To maintain consistent software delivery</p>
</li>
</ul>
<h2 id="heading-why-jenkins-benefits"><strong>Why Jenkins? (Benefits)</strong></h2>
<p>Jenkins is widely used in DevOps because it simplifies and accelerates the software development and delivery process. Below are the core benefits:</p>
<h3 id="heading-continuous-integration-ci-and-continuous-delivery-cd"><strong>Continuous Integration (CI) and Continuous Delivery (CD)</strong></h3>
<p>Jenkins is primarily used to implement CI/CD pipelines.</p>
<p><strong>Continuous Integration (CI):</strong><br />Whenever a developer pushes code, Jenkins automatically fetches the latest changes, builds the project, and runs tests. This ensures errors are detected early in the development cycle.</p>
<p><strong>Continuous Delivery (CD):</strong><br />After successful testing, Jenkins can automatically deploy the application to servers or cloud environments, enabling smooth and consistent delivery.</p>
<h3 id="heading-automation"><strong>Automation</strong></h3>
<p>Before Jenkins, build, test, and deployment processes were performed manually, which was slow and error-prone.<br />Jenkins automates these repetitive tasks, including:</p>
<ul>
<li><p>Code compilation</p>
</li>
<li><p>Running test suites</p>
</li>
<li><p>Deploying applications</p>
</li>
<li><p>Running scripts or commands</p>
</li>
<li><p>Generating reports</p>
</li>
</ul>
<p>Automation significantly reduces manual effort and increases team productivity.</p>
<h3 id="heading-faster-builds-and-deployments"><strong>Faster Builds and Deployments</strong></h3>
<p>With automated checks and deployments, developers receive quicker feedback, and applications can be delivered faster.<br />This leads to:</p>
<ul>
<li><p>Faster release cycles</p>
</li>
<li><p>Early bug detection</p>
</li>
<li><p>More reliable deployments</p>
</li>
<li><p>Improved overall development efficiency</p>
</li>
</ul>
<h2 id="heading-jenkins-architecture"><strong>Jenkins Architecture</strong></h2>
<p>Jenkins follows a simple yet powerful architecture consisting of two main components: the <strong>Controller (Master)</strong> and <strong>Agents (Nodes)</strong>. Understanding these components helps explain how Jenkins executes jobs efficiently.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765009346160/10d05bb6-3974-4e83-a4df-b4154fafe065.webp" alt class="image--center mx-auto" /></p>
<h3 id="heading-controller-master"><strong>Controller (Master)</strong></h3>
<p>The Jenkins Controller is the central part of the system. Its responsibilities include:</p>
<ul>
<li><p>Managing the Jenkins UI and dashboard</p>
</li>
<li><p>Scheduling and distributing jobs</p>
</li>
<li><p>Maintaining configurations, plugins, and security settings</p>
</li>
<li><p>Monitoring overall system health</p>
</li>
</ul>
<p>Although the controller can run jobs, its primary role is coordination and management.</p>
<h3 id="heading-agent-node"><strong>Agent (Node)</strong></h3>
<p>Agents are machines (physical or virtual) that execute the actual tasks assigned by the controller.<br />An agent can run on:</p>
<ul>
<li><p>A local machine</p>
</li>
<li><p>A remote Linux/Windows server</p>
</li>
<li><p>A Docker container</p>
</li>
<li><p>A cloud instance</p>
</li>
</ul>
<p>Agents help distribute workloads, allowing multiple jobs to run in parallel and improving performance.</p>
<h3 id="heading-how-a-job-runs-in-jenkins"><strong>How a Job Runs in Jenkins</strong></h3>
<p>The basic flow of job execution in Jenkins is:</p>
<ol>
<li><p>A developer pushes code to the source repository (e.g., GitHub).</p>
</li>
<li><p>Jenkins detects the change through a trigger or scheduled check.</p>
</li>
<li><p>The controller assigns the job to an available agent.</p>
</li>
<li><p>The agent pulls the latest code, builds it, and executes the defined steps (tests, scripts, deployments, etc.).</p>
</li>
<li><p>The agent sends the results back to the controller.</p>
</li>
<li><p>The controller displays build status, logs, and reports on the Jenkins dashboard.</p>
</li>
</ol>
<h2 id="heading-installing-jenkins"><strong>Installing Jenkins</strong></h2>
<p>There are multiple ways to install Jenkins, but using Docker is one of the simplest and most efficient methods, especially for beginners. Below is a recommended approach followed by the basic setup steps.</p>
<h3 id="heading-installing-jenkins-using-docker-recommended"><strong>Installing Jenkins Using Docker (Recommended)</strong></h3>
<p>Docker allows you to run Jenkins in an isolated container without manually configuring system dependencies.</p>
<p><strong>Command to run Jenkins using Docker:</strong></p>
<pre><code class="lang-bash">docker run -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts
</code></pre>
<p>This command:</p>
<ul>
<li><p>Pulls the Jenkins LTS (Long-Term Support) image</p>
</li>
<li><p>Exposes port <strong>8080</strong> for accessing Jenkins</p>
</li>
<li><p>Exposes port <strong>50000</strong> for agent communication</p>
</li>
<li><p>Starts Jenkins inside a container</p>
</li>
</ul>
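<p>One caveat with the command above: without a volume, all Jenkins data (jobs, plugins, users) is lost when the container is removed. A variant that persists data in a named volume; the sketch below only builds and prints the command (the volume name <code>jenkins_home</code> is our choice; <code>/var/jenkins_home</code> is the home directory used by the official image):</p>
<pre><code class="lang-bash">CMD="docker run -d -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts"
echo "$CMD"
# eval "$CMD"   # run it for real once you are happy with the command
</code></pre>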
<p>Once the container is running, you can access Jenkins in a browser at:</p>
<pre><code class="lang-bash">http://localhost:8080
</code></pre>
<h3 id="heading-basic-setup-steps"><strong>Basic Setup Steps</strong></h3>
<p>After starting Jenkins for the first time, follow these setup steps:</p>
<ol>
<li><p><strong>Unlock Jenkins:</strong><br /> Jenkins provides an initial admin password. You can retrieve it using the Docker logs or by checking the Jenkins home directory inside the container.</p>
</li>
<li><p><strong>Install Suggested Plugins:</strong><br /> Jenkins recommends a default set of plugins needed for common tasks. Installing them ensures essential features are available immediately.</p>
</li>
<li><p><strong>Create an Admin User:</strong><br /> Set up a username, password, and email that you will use to log in.</p>
</li>
<li><p><strong>Configure Instance Settings:</strong><br /> Confirm the Jenkins URL and any other basic settings displayed during setup.</p>
</li>
<li><p><strong>Jenkins is Ready:</strong><br /> After completing these steps, Jenkins will redirect you to the dashboard, where you can start creating jobs.</p>
</li>
</ol>
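<p>For step 1, the initial password lives at a fixed path inside the official image; you can read it from the container logs or print the file directly (a sketch; replace <code>CONTAINER_ID</code> with the ID shown by <code>docker ps</code>):</p>
<pre><code class="lang-bash">SECRET=/var/jenkins_home/secrets/initialAdminPassword
echo "$SECRET"   # the path Jenkins stores the password at

# docker logs CONTAINER_ID                 # password appears in startup logs
# docker exec CONTAINER_ID cat "$SECRET"   # or read the file directly
</code></pre>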
<h2 id="heading-jenkins-dashboard-overview"><strong>Jenkins Dashboard Overview</strong></h2>
<p>Once Jenkins is installed and set up, the dashboard serves as the central interface for managing and monitoring all activities. Understanding the main sections of the dashboard helps you navigate Jenkins effectively.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765013149407/e0fc0e6f-6c0a-4196-9f5c-3ab1395ddc5d.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-dashboard-components"><strong>Dashboard Components</strong></h3>
<h4 id="heading-1-main-menu-left-side-panel"><strong>1. Main Menu (Left Side Panel)</strong></h4>
<p>This panel provides access to essential options such as:</p>
<ul>
<li><p><strong>New Item:</strong> Create a new job or pipeline</p>
</li>
<li><p><strong>People:</strong> View user information</p>
</li>
<li><p><strong>Build History:</strong> View previously executed jobs</p>
</li>
<li><p><strong>Manage Jenkins:</strong> Configure system settings, plugins, security, and tools</p>
</li>
<li><p><strong>My Views:</strong> Create custom dashboard views</p>
</li>
<li><p><strong>Credentials:</strong> Store and manage secrets (passwords, tokens, SSH keys)</p>
</li>
</ul>
<h4 id="heading-2-job-list"><strong>2. Job List</strong></h4>
<p>The central area displays all existing Jenkins jobs. For each job, you can see:</p>
<ul>
<li><p>Job name</p>
</li>
<li><p>Current build status</p>
</li>
<li><p>Last build result</p>
</li>
<li><p>Build activity trends</p>
</li>
</ul>
<p>This section provides a quick overview of ongoing and past tasks.</p>
<h4 id="heading-3-build-history-right-side-panel-or-page"><strong>3. Build History (Right Side Panel or Page)</strong></h4>
<p>Shows a timeline of recent builds with their status:</p>
<ul>
<li><p>Successful (blue/green)</p>
</li>
<li><p>Failed (red)</p>
</li>
<li><p>Unstable (yellow)</p>
</li>
</ul>
<p>You can click on any build to view detailed logs and results.</p>
<h4 id="heading-4-manage-jenkins"><strong>4. Manage Jenkins</strong></h4>
<p>This is the most important administrative section, where you can:</p>
<ul>
<li><p>Install or update plugins</p>
</li>
<li><p>Manage global settings</p>
</li>
<li><p>Configure tools (JDK, Maven, Git)</p>
</li>
<li><p>Manage nodes and agents</p>
</li>
<li><p>Apply security settings</p>
</li>
<li><p>Backup and restore configurations</p>
</li>
</ul>
<h4 id="heading-5-system-information-and-logs"><strong>5. System Information and Logs</strong></h4>
<p>Provides detailed insights into system performance, environment variables, and Jenkins logs. Useful for troubleshooting.</p>
<h2 id="heading-creating-your-first-jenkins-job"><strong>Creating Your First Jenkins Job</strong></h2>
<p>Creating a basic job in Jenkins helps you understand how automation works within the system. One of the simplest ways to get started is by creating a <strong>Freestyle Project</strong>.</p>
<h3 id="heading-step-by-step-creating-a-freestyle-job"><strong>Step by Step: Creating a Freestyle Job</strong></h3>
<h4 id="heading-1-go-to-new-item"><strong>1. Go to “New Item”</strong></h4>
<p>From the Jenkins dashboard, click <strong>New Item</strong> on the left panel.</p>
<h4 id="heading-2-enter-a-job-name"><strong>2. Enter a Job Name</strong></h4>
<p>Provide a meaningful name for the job, for example:<br /><strong>First-Jenkins-Job</strong></p>
<h4 id="heading-3-select-freestyle-project"><strong>3. Select “Freestyle Project”</strong></h4>
<p>Choose the <strong>Freestyle Project</strong> option and click <strong>OK</strong>.</p>
<h3 id="heading-configuring-the-job"><strong>Configuring the Job</strong></h3>
<h4 id="heading-1-general-settings"><strong>1. General Settings</strong></h4>
<p>Add a short description if needed.<br />This helps identify the job’s purpose.</p>
<h4 id="heading-2-source-code-management-optional"><strong>2. Source Code Management (Optional)</strong></h4>
<p>If you want Jenkins to pull code from GitHub or any repository, enter the repository URL here.<br />This step is optional for a simple test job.</p>
<h4 id="heading-3-build-steps"><strong>3. Build Steps</strong></h4>
<p>Scroll down to the <strong>Build</strong> section and click <strong>Add build step</strong> → <strong>Execute shell</strong> (or Execute Windows batch command).</p>
<p>Add a simple command such as:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"Hello from Jenkins"</span>
</code></pre>
<p>This verifies that Jenkins can run shell commands.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765013232463/bee2d28b-2f4f-4191-855e-1488eb325c5a.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-running-the-job"><strong>Running the Job</strong></h3>
<ol>
<li><p>Click <strong>Save</strong>.</p>
</li>
<li><p>Click <strong>Build Now</strong> on the left side.</p>
</li>
<li><p>A new build will appear in the <strong>Build History</strong> section.</p>
</li>
</ol>
<h3 id="heading-viewing-build-output"><strong>Viewing Build Output</strong></h3>
<p>Click on the build number (e.g., <strong>#1</strong>).<br />Then click <strong>Console Output</strong> to view:</p>
<ul>
<li><p>The executed commands</p>
</li>
<li><p>The output generated</p>
</li>
<li><p>Success/failure status</p>
</li>
</ul>
<p>If everything is configured correctly, the console will display:</p>
<pre><code class="lang-bash">Hello from Jenkins
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765013382617/bfc03e35-eaa5-48c3-9557-29fc2b1da815.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-jenkins-pipeline-basics"><strong>Jenkins Pipeline Basics</strong></h2>
<p>Jenkins Pipelines allow you to define your entire build, test, and deployment process as code. This approach is more flexible and maintainable than traditional Freestyle projects.</p>
<h3 id="heading-what-is-a-jenkins-pipeline"><strong>What is a Jenkins Pipeline?</strong></h3>
<p>A Jenkins Pipeline is a set of automated steps written as code that defines how your application should be built, tested, and deployed.</p>
<p>Pipelines are written using a file called the <strong>Jenkinsfile</strong>, which is stored inside the project’s repository. This makes your CI/CD process version-controlled and portable.</p>
<h3 id="heading-types-of-jenkins-pipelines"><strong>Types of Jenkins Pipelines</strong></h3>
<h4 id="heading-1-declarative-pipeline"><strong>1. Declarative Pipeline</strong></h4>
<ul>
<li><p>More structured and beginner-friendly</p>
</li>
<li><p>Uses a predefined syntax</p>
</li>
<li><p>Recommended for most use cases</p>
</li>
</ul>
<p>Example:</p>
<pre><code class="lang-bash">pipeline {
    agent any
    stages {
        stage(<span class="hljs-string">'Build'</span>) {
            steps {
                <span class="hljs-built_in">echo</span> <span class="hljs-string">"Building the application"</span>
            }
        }
        stage(<span class="hljs-string">'Test'</span>) {
            steps {
                <span class="hljs-built_in">echo</span> <span class="hljs-string">"Running tests"</span>
            }
        }
    }
}
</code></pre>
<h4 id="heading-2-scripted-pipeline"><strong>2. Scripted Pipeline</strong></h4>
<ul>
<li><p>More flexible</p>
</li>
<li><p>Uses full Groovy scripting</p>
</li>
<li><p>Preferred for advanced automation</p>
</li>
<li><p>Not required for beginners</p>
</li>
</ul>
<h3 id="heading-key-components-of-a-pipeline"><strong>Key Components of a Pipeline</strong></h3>
<p><strong>1. Agent</strong><br />Specifies where the pipeline will run (any agent, specific node, or Docker container).</p>
<p><strong>2. Stages</strong><br />Represents major phases of the pipeline such as build, test, deploy.</p>
<p><strong>3. Steps</strong><br />Commands executed inside each stage, for example:</p>
<ul>
<li><p>Shell commands</p>
</li>
<li><p>Scripts</p>
</li>
<li><p>Tool executions</p>
</li>
</ul>
<p><strong>4. Post Section</strong><br />Defines actions that should run after the pipeline completes (success or failure).</p>
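<p>Putting these components together, here is a small example that adds a <strong>post</strong> section to the earlier pipeline. The stage and messages are placeholders:</p>
<pre><code class="lang-bash">pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo "Building the application"
            }
        }
    }
    post {
        success {
            echo "Pipeline finished successfully"
        }
        failure {
            echo "Pipeline failed"
        }
        always {
            echo "Runs after every build, success or failure"
        }
    }
}
</code></pre>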
<h3 id="heading-why-pipelines-matter"><strong>Why Pipelines Matter</strong></h3>
<ul>
<li><p>The pipeline is stored as code inside the repository</p>
</li>
<li><p>Easier to review, update, and maintain</p>
</li>
<li><p>Supports complex workflows</p>
</li>
<li><p>Reliable and repeatable builds</p>
</li>
<li><p>Essential for real-world CI/CD setups</p>
</li>
</ul>
<h2 id="heading-integrating-jenkins-with-gitgithub"><strong>Integrating Jenkins with Git/GitHub</strong></h2>
<p>Integrating Jenkins with Git or GitHub allows Jenkins to automatically pull the latest code changes and build the project whenever updates are made. This integration is a core part of Continuous Integration (CI).</p>
<h3 id="heading-prerequisites"><strong>Prerequisites</strong></h3>
<p>Before integrating:</p>
<ol>
<li><p>Jenkins must have the <strong>Git plugin</strong> installed.</p>
</li>
<li><p>Git should be installed on the Jenkins agent (or controller if jobs run there).</p>
</li>
<li><p>You should have a GitHub repository URL available.</p>
</li>
</ol>
<h3 id="heading-configuring-git-integration-in-a-jenkins-job"><strong>Configuring Git Integration in a Jenkins Job</strong></h3>
<h4 id="heading-1-open-your-jenkins-job"><strong>1. Open Your Jenkins Job</strong></h4>
<p>Go to the job where you want to set up Git integration, or create a new job.</p>
<h4 id="heading-2-navigate-to-source-code-management"><strong>2. Navigate to “Source Code Management”</strong></h4>
<p>Inside the job configuration page, scroll to the <strong>Source Code Management</strong> section and select <strong>Git</strong>.</p>
<h4 id="heading-3-add-repository-url"><strong>3. Add Repository URL</strong></h4>
<p>Paste your GitHub repository URL, for example:</p>
<pre><code class="lang-bash">https://github.com/username/repository.git
</code></pre>
<p>If the repository is private, you will need to add credentials from Jenkins → Credentials.</p>
<h4 id="heading-4-specify-branch"><strong>4. Specify Branch</strong></h4>
<p>Specify the branch Jenkins should use, typically:</p>
<pre><code class="lang-bash">*/main
</code></pre>
<p>or</p>
<pre><code class="lang-bash">*/master
</code></pre>
<h3 id="heading-setting-up-webhooks-optional-but-recommended"><strong>Setting Up Webhooks (Optional but Recommended)</strong></h3>
<p>Webhooks allow GitHub to notify Jenkins automatically when code is pushed.</p>
<h4 id="heading-steps"><strong>Steps:</strong></h4>
<ol>
<li><p>Open your GitHub repository.</p>
</li>
<li><p>Go to <strong>Settings</strong> → <strong>Webhooks</strong> → <strong>Add webhook</strong>.</p>
</li>
<li><p>Add your Jenkins webhook URL:</p>
</li>
</ol>
<pre><code class="lang-bash">http://&lt;jenkins-server-url&gt;/github-webhook/
</code></pre>
<ol start="4">
<li><p>Select <strong>Just the push event</strong>.</p>
</li>
<li><p>Save the webhook.</p>
</li>
</ol>
<p>Now, every time code is pushed, Jenkins gets notified immediately.</p>
<h3 id="heading-verifying-the-integration"><strong>Verifying the Integration</strong></h3>
<p>After you configure Git:</p>
<ol>
<li><p>Trigger a <strong>Build Now</strong> from Jenkins.</p>
</li>
<li><p>Jenkins will clone the repository.</p>
</li>
<li><p>You can view the Git commands and output inside the <strong>Console Output</strong>.</p>
</li>
</ol>
<p>This integration ensures your build pipeline always uses the latest code and supports fully automated CI workflows.</p>
<h2 id="heading-build-triggers"><strong>Build Triggers</strong></h2>
<p>Build triggers in Jenkins define when and how a job should start automatically. Instead of manually clicking “Build Now,” you can instruct Jenkins to run jobs based on specific conditions or events. This is an essential part of CI/CD automation.</p>
<h3 id="heading-common-types-of-build-triggers"><strong>Common Types of Build Triggers</strong></h3>
<h3 id="heading-1-trigger-builds-remotely-or-manually"><strong>1. Trigger Builds Remotely or Manually</strong></h3>
<p>This is the simplest method where you manually start a build by clicking <strong>Build Now</strong>.<br />Useful for testing or on-demand tasks.</p>
<h3 id="heading-2-build-after-a-code-push-github-webhook-trigger"><strong>2. Build After a Code Push (GitHub Webhook Trigger)</strong></h3>
<p>Using GitHub webhooks, Jenkins can automatically start a job whenever code is pushed to the repository.</p>
<p>Process:</p>
<ul>
<li><p>Developer pushes code</p>
</li>
<li><p>GitHub sends a notification to Jenkins</p>
</li>
<li><p>Jenkins pulls the latest code and runs the job</p>
</li>
</ul>
<p>This is widely used in Continuous Integration pipelines.</p>
<h3 id="heading-3-poll-scm-source-code-management"><strong>3. Poll SCM (Source Code Management)</strong></h3>
<p>Jenkins periodically checks the repository for any changes. If it detects a change, it triggers a build.</p>
<p>Example schedule (every 5 minutes):</p>
<pre><code class="lang-bash">H/5 * * * *
</code></pre>
<p>This option does not require webhooks, but it is less efficient since Jenkins actively checks the repository.</p>
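<p>If you use a Declarative Pipeline, the same polling schedule can be written in a <strong>triggers</strong> block. This is a sketch; the schedule string uses the same CRON-style syntax shown above:</p>
<pre><code class="lang-bash">pipeline {
    agent any
    triggers {
        // Check the repository every 5 minutes; build only if something changed
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('Build') {
            steps {
                echo "Triggered by an SCM change"
            }
        }
    }
}
</code></pre>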
<h3 id="heading-4-scheduled-builds-cron-jobs"><strong>4. Scheduled Builds (CRON Jobs)</strong></h3>
<p>You can schedule jobs using CRON syntax.</p>
<p>Examples:</p>
<ul>
<li><p>Every day at midnight:</p>
<pre><code class="lang-bash">  0 0 * * *
</code></pre>
</li>
<li><p>Every 15 minutes:</p>
<pre><code class="lang-bash">  H/15 * * * *
</code></pre>
</li>
</ul>
<p>Useful for periodic tasks like backups, scans, cleanup jobs, or nightly builds.</p>
<h3 id="heading-5-trigger-by-upstreamdownstream-projects"><strong>5. Trigger by Upstream/Downstream Projects</strong></h3>
<p>A job can be configured to run automatically after another job finishes.</p>
<p>Example:</p>
<ul>
<li><p>Job A builds the code</p>
</li>
<li><p>Job B automatically deploys it after Job A succeeds</p>
</li>
</ul>
<p>This helps create multi-step pipelines.</p>
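<p>In a Declarative Pipeline, the downstream job can declare this with the <strong>upstream</strong> trigger. This is a sketch; <strong>Job-A</strong> is a placeholder for the real upstream job name:</p>
<pre><code class="lang-bash">pipeline {
    agent any
    triggers {
        // Run this job after Job-A completes successfully
        upstream(upstreamProjects: 'Job-A', threshold: hudson.model.Result.SUCCESS)
    }
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying the build produced by Job-A"
            }
        }
    }
}
</code></pre>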
<h3 id="heading-why-build-triggers-are-important"><strong>Why Build Triggers Are Important</strong></h3>
<ul>
<li><p>Reduce manual work</p>
</li>
<li><p>Ensure faster feedback to developers</p>
</li>
<li><p>Maintain continuous and automated workflows</p>
</li>
<li><p>Enable reliable CI/CD processes</p>
</li>
</ul>
<h2 id="heading-important-jenkins-plugins"><strong>Important Jenkins Plugins</strong></h2>
<p>Jenkins’ power comes largely from its <strong>extensive plugin ecosystem</strong>. Plugins extend Jenkins’ capabilities, allowing integration with various tools, environments, and workflows. As a fresher, knowing the most commonly used plugins is sufficient to get started.</p>
<h3 id="heading-1-git-plugin"><strong>1. Git Plugin</strong></h3>
<ul>
<li><p>Enables Jenkins to interact with Git repositories.</p>
</li>
<li><p>Supports cloning, pulling, and managing branches.</p>
</li>
<li><p>Essential for CI/CD pipelines that depend on version control.</p>
</li>
</ul>
<h3 id="heading-2-pipeline-plugin"><strong>2. Pipeline Plugin</strong></h3>
<ul>
<li><p>Provides support for Jenkins Pipeline as code (Jenkinsfile).</p>
</li>
<li><p>Allows defining build, test, and deployment stages in a structured way.</p>
</li>
<li><p>Necessary for creating Declarative and Scripted pipelines.</p>
</li>
</ul>
<h3 id="heading-3-github-integration-plugin"><strong>3. GitHub Integration Plugin</strong></h3>
<ul>
<li><p>Simplifies integration with GitHub repositories.</p>
</li>
<li><p>Supports webhooks for automatic build triggers on code pushes.</p>
</li>
<li><p>Provides status reporting back to GitHub.</p>
</li>
</ul>
<h3 id="heading-4-docker-plugin-optional-for-beginners"><strong>4. Docker Plugin (Optional for Beginners)</strong></h3>
<ul>
<li><p>Enables Jenkins to build and run Docker containers.</p>
</li>
<li><p>Useful for containerized applications and DevOps workflows.</p>
</li>
</ul>
<h3 id="heading-5-email-extension-plugin"><strong>5. Email Extension Plugin</strong></h3>
<ul>
<li><p>Allows sending email notifications based on build results.</p>
</li>
<li><p>Can be configured to alert developers in case of build failures.</p>
</li>
</ul>
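<p>With the plugin installed, a pipeline can send a failure alert from its <strong>post</strong> section using the <strong>emailext</strong> step. This is a sketch; the address and messages are placeholders:</p>
<pre><code class="lang-bash">post {
    failure {
        emailext(
            to: 'dev-team@example.com',
            subject: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
            body: "Check the console output at ${env.BUILD_URL}"
        )
    }
}
</code></pre>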
<h3 id="heading-6-slack-notification-plugin-optional"><strong>6. Slack Notification Plugin (Optional)</strong></h3>
<ul>
<li><p>Sends build notifications to Slack channels.</p>
</li>
<li><p>Useful for team collaboration and monitoring build status.</p>
</li>
</ul>
<h3 id="heading-why-plugins-matter"><strong>Why Plugins Matter</strong></h3>
<ul>
<li><p>Extend Jenkins’ functionality to fit your project needs</p>
</li>
<li><p>Simplify integration with other tools</p>
</li>
<li><p>Enable automation beyond basic builds</p>
</li>
<li><p>Make Jenkins suitable for real-world CI/CD pipelines</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Getting Started With Shell Scripting for DevOps]]></title><description><![CDATA[Introduction
What is shell scripting?
Shell scripting is writing a sequence of commands for the command-line shell (most commonly Bash on Linux) into a text file so they run automatically. Instead of typing commands one by one, you save them as a scr...]]></description><link>https://blog.sushant.dev/getting-started-with-shell-scripting-for-devops</link><guid isPermaLink="true">https://blog.sushant.dev/getting-started-with-shell-scripting-for-devops</guid><category><![CDATA[shell script]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Sushant Pathare]]></dc:creator><pubDate>Fri, 05 Dec 2025 06:43:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1764916905352/e68a8a2f-744a-4d4e-94ac-fdb167072e37.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<h2 id="heading-what-is-shell-scripting">What is shell scripting?</h2>
<p>Shell scripting is writing a sequence of commands for the command-line shell (most commonly <strong>Bash</strong> on Linux) into a text file so they run automatically. Instead of typing commands one by one, you save them as a script (e.g., <code>hello.sh</code>) and execute the file; the shell then reads and runs each command in order.</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>
<span class="hljs-comment"># hello.sh - a simple shell script</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Hello, world!"</span>
</code></pre>
<p>Make it executable with <code>chmod +x hello.sh</code> and run <code>./hello.sh</code>. That file is a shell script.</p>
<h2 id="heading-why-devops-engineers-must-know-it">Why DevOps engineers must know it</h2>
<p>DevOps engineers work with servers, deployments and automation every day. Shell scripting makes this work faster and easier. It helps in automating tasks like taking backups, checking system health, installing packages or monitoring logs. Shell scripts run on almost every Linux server, so you do not need to install anything extra. They are also used inside CI/CD pipelines such as Jenkins, GitHub Actions and GitLab CI to run commands during build and deployment.</p>
<p>In short, shell scripting saves time, reduces errors and helps DevOps engineers automate their daily tasks efficiently.</p>
<h1 id="heading-basics-you-must-know">Basics You Must Know</h1>
<h2 id="heading-what-is-a-shell">What is a shell</h2>
<p>A shell is a program that takes the commands you type and tells the operating system to run them.<br />The most commonly used shells in Linux are<br />• bash (Bourne Again Shell)<br />• sh (Bourne Shell)</p>
<p>When people talk about shell scripting, they usually mean writing scripts for bash.</p>
<h2 id="heading-creating-and-running-a-script">Creating and running a script</h2>
<p>A shell script is simply a text file that contains a list of commands.</p>
<p>Steps to create and run your first script</p>
<ol>
<li>Create a new file</li>
</ol>
<pre><code class="lang-bash">nano hello.sh
</code></pre>
<ol start="2">
<li>Add some commands</li>
</ol>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"Hello from shell script"</span>
</code></pre>
<ol start="3">
<li><p>Save and exit</p>
</li>
<li><p>Make the script executable using chmod +x</p>
</li>
</ol>
<pre><code class="lang-bash">chmod +x hello.sh
</code></pre>
<ol start="5">
<li>Run the script</li>
</ol>
<pre><code class="lang-bash">./hello.sh
</code></pre>
<h2 id="heading-what-is-chmod-x">What is chmod +x</h2>
<p>chmod is used to change file permissions.<br />+x means giving the file permission to be executed like a program.<br />Without this permission, the shell will not allow you to run the script directly.</p>
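<p>A quick way to see the difference: a script without execute permission can still be run by passing it to the interpreter, while chmod +x lets you run the file directly. This assumes the hello.sh created earlier:</p>
<pre><code class="lang-bash">bash hello.sh      <span class="hljs-comment"># works even without execute permission</span>

chmod +x hello.sh  <span class="hljs-comment"># give the file execute permission</span>
./hello.sh         <span class="hljs-comment"># now it runs directly</span>
</code></pre>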
<h2 id="heading-what-is-a-shebang">What is a Shebang</h2>
<p>A shebang is the first line written at the top of a shell script.</p>
<p>Example</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>
</code></pre>
<p>This line tells the system which interpreter should run the script.<br />In this case, it is the bash shell.</p>
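<p>A common variant uses env to find bash through the PATH, which helps when bash is installed in different locations on different systems:</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/usr/bin/env bash</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Running under bash version <span class="hljs-variable">$BASH_VERSION</span>"</span>
</code></pre>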
<h1 id="heading-everyday-bash-commands-used-in-scripts">Everyday Bash Commands Used in Scripts</h1>
<p>Shell scripts mainly run Linux commands. These are the basic commands you will use almost every day.</p>
<h2 id="heading-file-operations">File operations</h2>
<p>These commands help you create, copy, move or delete files and folders.</p>
<p><strong>Create a folder</strong></p>
<pre><code class="lang-bash">mkdir project
</code></pre>
<p><strong>Copy a file</strong></p>
<pre><code class="lang-bash">cp file1.txt backup.txt
</code></pre>
<p><strong>Remove a file</strong></p>
<pre><code class="lang-bash">rm oldfile.txt
</code></pre>
<hr />
<h2 id="heading-viewing-files">Viewing files</h2>
<p>These commands help you read or search inside files.</p>
<p><strong>Show the full content of a file</strong></p>
<pre><code class="lang-bash">cat log.txt
</code></pre>
<p><strong>Scroll through a file slowly</strong></p>
<pre><code class="lang-bash">less log.txt
</code></pre>
<p><strong>Search for a word inside a file</strong></p>
<pre><code class="lang-bash">grep error log.txt
</code></pre>
<h2 id="heading-pipes-and-redirection">Pipes and redirection</h2>
<p>Pipes and redirection allow you to connect commands and control where output goes.</p>
<p><strong>Pipe symbol (|)</strong><br />Takes the output of one command and sends it to another.<br />Example</p>
<pre><code class="lang-bash">cat log.txt | grep error
</code></pre>
<p>This shows only the lines containing the word “error”.</p>
<p><strong>Single arrow (&gt;)</strong><br />Sends the output to a new file. If the file already exists, it will be replaced.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"Backup complete"</span> &gt; status.txt
</code></pre>
<p><strong>Double arrow (&gt;&gt;)</strong><br />Adds output at the end of an existing file.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"New entry added"</span> &gt;&gt; status.txt
</code></pre>
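<p>Pipes and redirection are often combined. For example, count the error lines in log.txt from the earlier examples and append the count to a report file:</p>
<pre><code class="lang-bash">grep error log.txt | wc -l &gt;&gt; error-report.txt
cat error-report.txt
</code></pre>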
<h1 id="heading-variables-and-inputs">Variables and Inputs</h1>
<p>Shell scripts often store information in variables and take input from the user. This makes scripts flexible and interactive.</p>
<h2 id="heading-creating-variables">Creating variables</h2>
<p>A variable stores a value such as text, numbers or file names.</p>
<p>Example</p>
<pre><code class="lang-bash">name=<span class="hljs-string">"Sushant"</span>
age=22
</code></pre>
<p>Important<br />There should be no spaces around the equal sign.</p>
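<p>A quick illustration of why the spacing rule matters:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Wrong: with spaces, bash tries to run a command called "name"</span>
<span class="hljs-comment"># name = "Sushant"</span>

<span class="hljs-comment"># Correct: no spaces around the equal sign</span>
name=<span class="hljs-string">"Sushant"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-variable">$name</span>"</span>
</code></pre>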
<h2 id="heading-using-a-variable">Using a variable</h2>
<p>To use the stored value, you put a dollar sign before the variable name.</p>
<p>Example</p>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-variable">$name</span>
<span class="hljs-built_in">echo</span> <span class="hljs-variable">$age</span>
</code></pre>
<p>This will print the values stored in the variables.</p>
<h2 id="heading-reading-user-input">Reading user input</h2>
<p>The read command allows your script to take input from the user while it is running.</p>
<p>Example</p>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"Enter your name"</span>
<span class="hljs-built_in">read</span> username

<span class="hljs-built_in">echo</span> <span class="hljs-string">"Hello <span class="hljs-variable">$username</span>"</span>
</code></pre>
<p>In this script<br />• The script asks the user to enter a name<br />• The read command stores the input inside the username variable<br />• The script prints a message using that variable</p>
<h1 id="heading-conditions-if-else">Conditions (If Else)</h1>
<p>Conditions help scripts make decisions. They allow the script to run different commands based on different situations.</p>
<h2 id="heading-basic-syntax">Basic syntax</h2>
<p>This is the basic structure of an if else statement in bash.</p>
<pre><code class="lang-bash"><span class="hljs-keyword">if</span> [ condition ]
<span class="hljs-keyword">then</span>
    commands
<span class="hljs-keyword">else</span>
    commands
<span class="hljs-keyword">fi</span>
</code></pre>
<p>Example</p>
<pre><code class="lang-bash">num=10

<span class="hljs-keyword">if</span> [ <span class="hljs-variable">$num</span> -gt 5 ]
<span class="hljs-keyword">then</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"Number is greater than 5"</span>
<span class="hljs-keyword">else</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"Number is not greater than 5"</span>
<span class="hljs-keyword">fi</span>
</code></pre>
<h2 id="heading-checking-files">Checking files</h2>
<p>You can check if a file or folder exists before running a command.</p>
<p><strong>Check if a file exists</strong></p>
<pre><code class="lang-bash"><span class="hljs-keyword">if</span> [ -f myfile.txt ]
<span class="hljs-keyword">then</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"File found"</span>
<span class="hljs-keyword">else</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"File not found"</span>
<span class="hljs-keyword">fi</span>
</code></pre>
<p><strong>Check if a directory exists</strong></p>
<pre><code class="lang-bash"><span class="hljs-keyword">if</span> [ -d myfolder ]
<span class="hljs-keyword">then</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"Directory exists"</span>
<span class="hljs-keyword">else</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"Directory does not exist"</span>
<span class="hljs-keyword">fi</span>
</code></pre>
<h2 id="heading-checking-numbers">Checking numbers</h2>
<p>Bash uses special operators to compare numbers.</p>
<p><strong>Equal to</strong></p>
<pre><code class="lang-bash">[ <span class="hljs-variable">$a</span> -eq <span class="hljs-variable">$b</span> ]
</code></pre>
<p><strong>Not equal to</strong></p>
<pre><code class="lang-bash">[ <span class="hljs-variable">$a</span> -ne <span class="hljs-variable">$b</span> ]
</code></pre>
<p><strong>Greater than</strong></p>
<pre><code class="lang-bash">[ <span class="hljs-variable">$a</span> -gt <span class="hljs-variable">$b</span> ]
</code></pre>
<p><strong>Less than</strong></p>
<pre><code class="lang-bash">[ <span class="hljs-variable">$a</span> -lt <span class="hljs-variable">$b</span> ]
</code></pre>
<p>Example</p>
<pre><code class="lang-bash">marks=75

<span class="hljs-keyword">if</span> [ <span class="hljs-variable">$marks</span> -ge 60 ]
<span class="hljs-keyword">then</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"Pass"</span>
<span class="hljs-keyword">else</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"Fail"</span>
<span class="hljs-keyword">fi</span>
</code></pre>
<h1 id="heading-loops">Loops</h1>
<p>Loops allow you to run a command multiple times. They are very useful in automation and DevOps.</p>
<h2 id="heading-for-loop">for loop</h2>
<p>A for loop runs a command for each item in a list.</p>
<p>Example</p>
<pre><code class="lang-bash"><span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> 1 2 3 4 5
<span class="hljs-keyword">do</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"Number is <span class="hljs-variable">$i</span>"</span>
<span class="hljs-keyword">done</span>
</code></pre>
<p>You can also loop through files</p>
<pre><code class="lang-bash"><span class="hljs-keyword">for</span> file <span class="hljs-keyword">in</span> *.txt
<span class="hljs-keyword">do</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"Found file: <span class="hljs-variable">$file</span>"</span>
<span class="hljs-keyword">done</span>
</code></pre>
<h2 id="heading-while-loop">while loop</h2>
<p>A while loop keeps running as long as the condition is true.</p>
<p>Example</p>
<pre><code class="lang-bash">count=1

<span class="hljs-keyword">while</span> [ <span class="hljs-variable">$count</span> -le 5 ]
<span class="hljs-keyword">do</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"Count is <span class="hljs-variable">$count</span>"</span>
    count=$((count + <span class="hljs-number">1</span>))
<span class="hljs-keyword">done</span>
</code></pre>
<h2 id="heading-simple-devops-example-looping-over-log-files">Simple DevOps example: looping over log files</h2>
<p>This example checks all <code>.log</code> files inside a logs folder.</p>
<pre><code class="lang-bash"><span class="hljs-keyword">for</span> <span class="hljs-built_in">log</span> <span class="hljs-keyword">in</span> logs/*.<span class="hljs-built_in">log</span>
<span class="hljs-keyword">do</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"Checking file: <span class="hljs-variable">$log</span>"</span>
    grep <span class="hljs-string">"error"</span> <span class="hljs-variable">$log</span>
<span class="hljs-keyword">done</span>
</code></pre>
<p>This script<br />• Goes through every log file<br />• Prints its name<br />• Searches for the word "error" inside it</p>
<p>Useful when you want to quickly scan multiple log files on a server.</p>
<h1 id="heading-functions">Functions</h1>
<p>Functions help you group a set of commands together and reuse them whenever needed. This makes your scripts cleaner and easier to maintain.</p>
<h2 id="heading-creating-simple-functions">Creating simple functions</h2>
<p>A function is defined once and can be called many times.</p>
<p>Example</p>
<pre><code class="lang-bash"><span class="hljs-function"><span class="hljs-title">greet</span></span>() {
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"Hello, welcome to the script"</span>
}

greet
greet
</code></pre>
<p>The script will print the message two times because the function is called two times.</p>
<p>Another example with parameters</p>
<pre><code class="lang-bash"><span class="hljs-function"><span class="hljs-title">show_name</span></span>() {
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"Your name is <span class="hljs-variable">$1</span>"</span>
}

show_name <span class="hljs-string">"Sushant"</span>
</code></pre>
<p>Here, <code>$1</code> represents the first argument passed to the function.</p>
<h2 id="heading-why-functions-matter-in-automation">Why functions matter in automation</h2>
<p>Functions are very useful in DevOps and automation because</p>
<p>• They help avoid repeating code<br />• They make scripts shorter and easier to read<br />• They allow you to build reusable tasks like backup, clean up or health checks<br />• They make large scripts more organized and modular</p>
<p>Example use cases<br />• A function for checking CPU usage<br />• A function for taking backups<br />• A function for sending alerts</p>
<p>Functions turn a long script into small, manageable pieces that are easier to maintain.</p>
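<p>As a small sketch of this idea, here are two reusable functions: one for logging messages and one for taking a backup. The folder paths are example placeholders:</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

<span class="hljs-comment"># Reusable helper: print a message with a timestamp</span>
<span class="hljs-function"><span class="hljs-title">log_message</span></span>() {
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-subst">$(date '+%Y-%m-%d %H:%M:%S')</span> <span class="hljs-variable">$1</span>"</span>
}

<span class="hljs-comment"># Reusable task: copy one folder into another and log the result</span>
<span class="hljs-function"><span class="hljs-title">backup_folder</span></span>() {
    cp -r <span class="hljs-string">"<span class="hljs-variable">$1</span>"</span> <span class="hljs-string">"<span class="hljs-variable">$2</span>"</span>
    log_message <span class="hljs-string">"Backed up <span class="hljs-variable">$1</span> to <span class="hljs-variable">$2</span>"</span>
}

log_message <span class="hljs-string">"Script started"</span>
backup_folder /home/user/data /home/user/backups
</code></pre>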
<h1 id="heading-automation-with-cron-jobs">Automation With Cron Jobs</h1>
<h2 id="heading-what-is-cron">What is cron</h2>
<p>Cron is a time-based scheduler in Linux.<br />It allows you to run scripts automatically at fixed times, such as every minute, every hour, or every day.</p>
<p>Cron is very useful for DevOps engineers because it helps automate tasks without manual effort.</p>
<p>Examples of tasks cron can automate<br />• Backups<br />• Log cleanup<br />• Health checks<br />• Sending reports</p>
<h2 id="heading-scheduling-a-task">Scheduling a task</h2>
<p>Cron jobs are stored in a file called the crontab.</p>
<p>To edit the crontab, use</p>
<pre><code class="lang-bash">crontab -e
</code></pre>
<p>Cron job format</p>
<pre><code class="lang-bash">* * * * * <span class="hljs-built_in">command</span>
</code></pre>
<p>The five stars represent</p>
<ol>
<li><p>Minute</p>
</li>
<li><p>Hour</p>
</li>
<li><p>Day of month</p>
</li>
<li><p>Month</p>
</li>
<li><p>Day of week</p>
</li>
</ol>
<p>Example<br />Run a script every day at 3 AM</p>
<pre><code class="lang-bash">0 3 * * * /home/user/backup.sh
</code></pre>
<h2 id="heading-example-daily-backup-script">Example: daily backup script</h2>
<p><strong>Step 1: Create a backup script</strong></p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

cp -r /home/user/data /home/user/backup_folder
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Backup completed at <span class="hljs-subst">$(date)</span>"</span>
</code></pre>
<p>Save it as <code>backup.sh</code> and make it executable</p>
<pre><code class="lang-bash">chmod +x backup.sh
</code></pre>
<p><strong>Step 2: Schedule it with cron</strong></p>
<pre><code class="lang-bash">0 2 * * * /home/user/backup.sh
</code></pre>
<p>This will run the backup script every day at 2 AM.</p>
<h1 id="heading-real-devops-examples">Real DevOps Examples</h1>
<p>These are small scripts that DevOps engineers commonly use in real work. Each example is short, practical and easy to understand.</p>
<h2 id="heading-install-packages">Install packages</h2>
<p>This script installs a list of packages on a Linux server.</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

packages=<span class="hljs-string">"nginx git curl"</span>

<span class="hljs-keyword">for</span> pkg <span class="hljs-keyword">in</span> <span class="hljs-variable">$packages</span>
<span class="hljs-keyword">do</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"Installing <span class="hljs-variable">$pkg</span>"</span>
    sudo apt-get install -y <span class="hljs-variable">$pkg</span>
<span class="hljs-keyword">done</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"All packages installed"</span>
</code></pre>
<p>This is useful for setting up a new server quickly.</p>
<h2 id="heading-backup-a-folder">Backup a folder</h2>
<p>This script creates a backup with the current date in the file name.</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

source_folder=<span class="hljs-string">"/home/user/data"</span>
backup_folder=<span class="hljs-string">"/home/user/backups"</span>
date=$(date +%Y-%m-%d)

cp -r <span class="hljs-variable">$source_folder</span> <span class="hljs-variable">$backup_folder</span>/data-<span class="hljs-variable">$date</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"Backup completed"</span>
</code></pre>
<p>A simple script like this can be connected to cron for automatic backups.</p>
<h2 id="heading-monitor-cpu-or-memory">Monitor CPU or memory</h2>
<p>This script checks CPU usage and prints a warning if it is high.</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

cpu=$(top -bn1 | grep <span class="hljs-string">"Cpu(s)"</span> | awk <span class="hljs-string">'{print $2 + $4}'</span>)

<span class="hljs-built_in">echo</span> <span class="hljs-string">"CPU Usage: <span class="hljs-variable">$cpu</span>"</span>

<span class="hljs-keyword">if</span> (( $(<span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-variable">$cpu</span> &gt; 80"</span> | bc -l) ))
<span class="hljs-keyword">then</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"Warning: High CPU usage"</span>
<span class="hljs-keyword">fi</span>
</code></pre>
<p>You can use a similar script to monitor memory, disk usage or logs on a server.</p>
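<p>As a concrete sketch of that idea, here is a hypothetical disk-usage check written in the same style as the CPU script. The 80% threshold and the reliance on POSIX <code>df</code> output are assumptions, not part of the original examples.</p>

```bash
#!/bin/bash

# Warn when root filesystem usage crosses a threshold (example value)
threshold=80

# -P forces POSIX output: the usage percentage is the 5th column of line 2
usage=$(df -P / | awk 'NR==2 {gsub(/%/, ""); print $5}')

echo "Disk usage: ${usage}%"

if [ "$usage" -gt "$threshold" ]; then
    echo "Warning: High disk usage"
fi
```

<p>Swapping <code>df</code> for <code>free</code> (memory) or <code>du</code> (folder sizes) gives you the other monitors mentioned above.</p>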
<h1 id="heading-conclusion">Conclusion</h1>
<p>Shell scripting is one of the most important skills for anyone working in DevOps. It helps automate everyday tasks, makes server management faster, and reduces manual work. With simple scripts, you can take backups, install packages, monitor system health and manage logs without doing the same steps again and again. Once you understand the basics, you can build powerful automation that saves time and avoids human errors.</p>
]]></content:encoded></item><item><title><![CDATA[TimescaleDB]]></title><description><![CDATA[Introduction
TimescaleDB is an open-source database that's specifically designed for time-series data, which is data that is recorded over time, like sensor readings, stock prices, or application metrics. The simplest way to think about it is as a sp...]]></description><link>https://blog.sushant.dev/timescaledb</link><guid isPermaLink="true">https://blog.sushant.dev/timescaledb</guid><category><![CDATA[fasterquery]]></category><category><![CDATA[timescaledb]]></category><category><![CDATA[PostgreSQL]]></category><category><![CDATA[timeseries]]></category><dc:creator><![CDATA[Sushant Pathare]]></dc:creator><pubDate>Sun, 03 Aug 2025 09:06:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1754211908800/040a3592-679b-4628-af06-efb24fe7e187.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>TimescaleDB is an open-source database that's specifically designed for <strong>time-series data</strong>, which is data that is recorded over time, like sensor readings, stock prices, or application metrics. The simplest way to think about it is as a specialized upgrade for <strong>PostgreSQL</strong>.</p>
<p>Instead of being a completely new database, TimescaleDB is an <strong>extension</strong> that you add to a standard PostgreSQL database. This is a huge benefit because it means you get to keep all the features you love about PostgreSQL like SQL, reliability, and support for JSON data while also getting powerful new features for time-series data.</p>
<h3 id="heading-a-simple-example">A Simple Example</h3>
<p>Let's say you're tracking the temperature in your house every minute.</p>
<ul>
<li><p><strong>Without TimescaleDB</strong>, all of your data would go into one huge table. Over a year, this table would have hundreds of thousands of entries, making it slow to query.</p>
</li>
<li><p><strong>With TimescaleDB</strong>, you create a hypertable. It automatically organizes your data. So, all your January data is in one chunk, February data in another, and so on. When you ask for the average temperature in February, TimescaleDB only has to look at the February chunk, which is much faster.</p>
</li>
</ul>
<p>This simple yet powerful approach allows you to handle massive amounts of time-stamped data without sacrificing the reliability and flexibility of PostgreSQL.</p>
<h2 id="heading-traditional-postgresql-vs-time-series-needs">Traditional PostgreSQL vs. Time-Series Needs</h2>
<p>If you've ever worked with a relational database, chances are you've used PostgreSQL. It's a fantastic, general-purpose database known for its reliability and rich feature set. But what happens when you try to use it for a very specific type of data: <strong>time-series data</strong>?</p>
<p>While traditional PostgreSQL excels at many things, it's not built for the unique demands of time-stamped information. Understanding this difference is the first step to choosing the right tool for the job.</p>
<h3 id="heading-typical-oltp-patterns-vs-time-series-workloads">Typical OLTP Patterns vs. Time-Series Workloads</h3>
<p>At its core, a standard PostgreSQL database is optimized for <strong>Online Transaction Processing (OLTP)</strong>. Think of a typical e-commerce site:</p>
<ul>
<li><p><strong>Frequent changes</strong>: Products are updated, orders are placed, and customer profiles are modified.</p>
</li>
<li><p><strong>Small, specific queries</strong>: A query might ask for a single customer's order history or the current stock level of one product.</p>
</li>
<li><p><strong>Balance of operations</strong>: There's a mix of reading, writing, updating, and deleting data.</p>
</li>
</ul>
<p>Time-series data, on the other hand, has a very different rhythm. Imagine data coming from a fleet of IoT sensors, financial market data, or website performance metrics:</p>
<ul>
<li><p><strong>Append-only</strong>: You are almost always adding new data. Older data is rarely, if ever, changed.</p>
</li>
<li><p><strong>Aggregate-heavy</strong>: Queries often ask for trends and summaries over large time periods, such as "What was the average CPU usage for the last month?" or "How many unique visitors did the website have each day in Q1?"</p>
</li>
<li><p><strong>Data volume</strong>: The amount of data grows relentlessly and can quickly become massive, with new data points arriving every second or minute.</p>
</li>
</ul>
<p>When you try to force this append-only, aggregate-heavy workload onto a standard PostgreSQL setup, you'll inevitably hit some performance roadblocks.</p>
<h3 id="heading-the-pain-points-of-using-vanilla-postgres-at-scale">The Pain Points of Using Vanilla Postgres at Scale</h3>
<p>Traditional PostgreSQL struggles with large-scale time-series data because it's built for frequent data changes, not for continuous, append-only data streams. This mismatch leads to three key issues:</p>
<ul>
<li><p><strong>Index Bloat</strong>: Constant data insertions cause indexes to become inefficient and oversized, slowing down queries and wasting disk space.</p>
</li>
<li><p><strong>Vacuum Overhead</strong>: The database's cleanup process (<code>VACUUM</code>) has to work overtime to manage the massive influx of new rows, consuming significant system resources and degrading performance.</p>
</li>
<li><p><strong>Slow Aggregates</strong>: Queries that summarize data over long periods become very slow because the database must scan a single, massive table, which is not optimized for this type of query.</p>
</li>
</ul>
<h2 id="heading-the-magic-of-timescaledbs-architecture">The Magic of TimescaleDB's Architecture</h2>
<p>So, how does TimescaleDB pull off its time-series wizardry? The secret lies in its elegant architecture, built upon the fundamental concept of the <strong>hypertable</strong>.</p>
<h4 id="heading-the-hypertable-your-window-to-time-partitioned-data">The Hypertable: Your Window to Time-Partitioned Data</h4>
<p>Imagine you have an ever-growing stream of data. Instead of letting it pile up in one enormous table, what if you could automatically organize it into neat, time-based compartments? That's essentially what a <strong>hypertable</strong> does.</p>
<p>Think of a hypertable as a single, logical table that you interact with just like any other PostgreSQL table. You create it, you insert data into it, and you query it. The beauty, however, lies beneath the surface.</p>
<p>TimescaleDB automatically <strong>partitions</strong> the hypertable's data based on time. You specify a time column (like a timestamp), and TimescaleDB will then divide your data into smaller, physical tables called <strong>chunks</strong>, often based on time intervals like days, weeks, or months. You can even add optional <strong>space partitioning</strong> based on another column, such as a sensor ID or location, for further organization.</p>
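<p>For illustration, both the chunk interval and optional space partitioning can be set when a hypertable is created. The sketch below assumes a <code>sensor_data</code> table with <code>time</code> and <code>device_id</code> columns; the interval and partition count are arbitrary examples:</p>

```pgsql
-- Weekly chunks, plus space partitioning on device_id
SELECT create_hypertable(
    'sensor_data', 'time',
    partitioning_column => 'device_id',
    number_partitions   => 4,
    chunk_time_interval => INTERVAL '7 days'
);
```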
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754207455131/d28482d6-29ef-41dd-8e62-e64d28319fd4.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-under-the-hood-chunks-smart-planning-and-query-speed">Under the Hood: Chunks, Smart Planning, and Query Speed</h2>
<p>Here's where the real magic happens:</p>
<ul>
<li><p><strong>Chunks: The Building Blocks</strong>: These individual chunks are the actual physical tables storing your data. By keeping them smaller and time-bound, TimescaleDB ensures that most queries only need to scan a fraction of your total data. This is a massive performance win compared to scanning one gigantic table.</p>
</li>
<li><p><strong>Distributed Query Planning</strong>: When you send a query to your hypertable, TimescaleDB's intelligent query planner kicks in. It understands how your data is organized into chunks and figures out <em>exactly</em> which chunks contain the data relevant to your query.</p>
</li>
<li><p><strong>Query Pruning: Cutting Through the Noise</strong>: This is the superpower that makes TimescaleDB so fast for time-series analysis. Based on the time range (and any space partitioning criteria) in your query, the planner <strong>prunes</strong> (or eliminates) the chunks that don't contain the data you need. It's like having a librarian who knows exactly which shelf (chunk) to go to instead of searching the entire library.</p>
</li>
</ul>
<p><strong>Analogy:</strong></p>
<blockquote>
<p>Think of organizing your yearly financial records. Instead of one massive folder for everything, you might have separate folders for each month. If you need to find a receipt from July, you only need to open the July folder, not sift through the entire year's worth of documents. TimescaleDB does this automatically for your time-series data.</p>
</blockquote>
<h2 id="heading-from-plain-table-to-powerful-hypertable-a-simple-transformation">From Plain Table to Powerful Hypertable: A Simple Transformation</h2>
<p>Turning an ordinary PostgreSQL table into a time-series powerhouse is surprisingly straightforward. Here's a glimpse of the code:</p>
<p><strong>First, create your regular PostgreSQL table:</strong></p>
<pre><code class="lang-pgsql"><span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">TABLE</span> sensor_data ( <span class="hljs-type">time</span> <span class="hljs-type">TIMESTAMPTZ</span> <span class="hljs-keyword">NOT</span> <span class="hljs-keyword">NULL</span>, device_id <span class="hljs-type">TEXT</span>, temperature <span class="hljs-type">DOUBLE</span> <span class="hljs-type">PRECISION</span> );
</code></pre>
<p><strong>Then, with a single command, transform it into a hypertable:</strong></p>
<pre><code class="lang-pgsql"> <span class="hljs-keyword">SELECT</span> create_hypertable(<span class="hljs-string">'sensor_data'</span>, <span class="hljs-string">'time'</span>);
</code></pre>
<p>That's it! TimescaleDB now takes over the management of your <code>sensor_data</code> table, automatically partitioning new data into chunks based on the <code>time</code> column.</p>
<p>By abstracting the complexities of partitioning and intelligently routing queries, TimescaleDB's architecture provides a robust and highly performant foundation for handling the ever-increasing volumes of time-series data. In the next section, we'll explore some of the key benefits this architecture unlocks.</p>
<h2 id="heading-key-features">Key Features</h2>
<h4 id="heading-high-ingest-parallel-writes-keeping-up-with-the-flow"><strong>High-Ingest Parallel Writes: Keeping Up with the Flow</strong></h4>
<p>Time-series data often arrives in continuous streams, and the ability to ingest this data quickly and efficiently is crucial. TimescaleDB is engineered for <strong>high-ingest parallel writes</strong>.</p>
<p>Because data is automatically divided into chunks based on time, multiple write operations can occur simultaneously on different chunks. This parallelization significantly increases the write throughput, allowing your database to keep pace with even the most demanding data streams from numerous sensors, devices, or applications. This means you can handle a massive influx of data without creating bottlenecks.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754210534134/9117544c-3130-4cbc-b0b3-097c6053ad0a.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-compression-shrinking-your-time-series-footprint">Compression: Shrinking Your Time-Series Footprint</h4>
<p>Time-series data tends to accumulate rapidly. Efficient storage is therefore paramount. TimescaleDB offers powerful <strong>compression</strong> techniques specifically designed for time-series data.</p>
<p>It achieves this by organizing data within chunks into <strong>columnar segments</strong>. This columnar layout allows for effective compression using methods like <strong>delta encoding</strong> (storing the difference from the previous value) and <strong>dictionary encoding</strong> (replacing frequently occurring values with smaller codes).</p>
<p>The result is a significant reduction in storage costs, often achieving compression ratios of 90% or more, especially for older, less frequently queried data. This allows you to retain historical data for longer periods without breaking the bank.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754210666736/eb0a146f-3d43-44b8-99fa-674b50321eff.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-continuous-aggregates-real-time-insights-without-the-wait"><strong>Continuous Aggregates: Real-time Insights Without the Wait</strong></h4>
<p>Running aggregate queries over vast datasets can be time-consuming. TimescaleDB introduces <strong>continuous aggregates</strong>, which are essentially <strong>incremental materialized views</strong> that automatically refresh in the background as new data arrives.</p>
<p>Instead of recalculating aggregations from scratch every time you need them, TimescaleDB incrementally updates these pre-computed views. This allows for near real-time analysis of aggregated data (like hourly averages, daily totals, etc.) with significantly lower query latency. You get up-to-the-minute insights without the performance hit of repeatedly querying the raw data.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754210866232/f35e2ec4-10f0-43b8-83e5-8c9781e027e7.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-built-in-data-retention-and-automated-policies-managing-your-data-lifecycle"><strong>Built-in Data Retention and Automated Policies: Managing Your Data Lifecycle</strong></h4>
<p>Managing the lifecycle of time-series data is crucial. Often, older data becomes less relevant or needs to be archived for compliance reasons. TimescaleDB provides <strong>built-in data retention policies</strong> that allow you to automatically remove data older than a specified time period.</p>
<p>You can define these policies directly within the database, and TimescaleDB will handle the data removal in the background, freeing up storage space and simplifying data management. This automation ensures that you comply with retention requirements without manual intervention.</p>
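<p>Defining such a policy is a single call; the 90-day window below is just an example:</p>

```pgsql
-- Automatically drop chunks whose data is older than 90 days
SELECT add_retention_policy('sensor_data', INTERVAL '90 days');
```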
<h4 id="heading-time-bucket-gap-fill-and-advanced-analytics-functions-powerful-tools-for-analysis"><strong>Time-bucket, Gap-Fill, and Advanced Analytics Functions: Powerful Tools for Analysis</strong></h4>
<p>TimescaleDB extends SQL with a set of powerful functions specifically designed for time-series analysis:</p>
<ul>
<li><p><code>time_bucket()</code>: This function is essential for grouping data into regular time intervals (e.g., 5-minute buckets, hourly buckets), making it easy to perform aggregations over time.</p>
</li>
<li><p><strong>Gap-fill</strong>: When dealing with time-series data, missing data points are common. TimescaleDB offers functions to fill these gaps based on various strategies (e.g., linear interpolation, carrying forward the last known value).</p>
</li>
<li><p><strong>Advanced Analytics</strong>: Beyond basic aggregations, TimescaleDB provides functions for more sophisticated time-series analysis, such as first/last value within a group, time differences, and more, making complex analytical queries easier to write and execute efficiently.</p>
</li>
</ul>
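<p>These functions combine naturally. A sketch using <code>time_bucket_gapfill</code> with last-observation-carried-forward (<code>locf</code>), reusing the <code>sensor_data</code> columns from the earlier examples:</p>

```pgsql
-- Hourly averages over the last 6 hours, with gaps filled
-- by carrying forward the last known value
SELECT time_bucket_gapfill('1 hour', time) AS bucket,
       locf(avg(temperature)) AS temp
FROM sensor_data
WHERE time > now() - INTERVAL '6 hours'
  AND time < now()
GROUP BY bucket
ORDER BY bucket;
```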
<h4 id="heading-multi-node-distributed-option-scaling-horizontally-for-massive-scale"><strong>Multi-node / Distributed Option: Scaling Horizontally for Massive Scale</strong></h4>
<p>For truly massive time-series datasets and high-throughput requirements that exceed the capacity of a single server, TimescaleDB offers a <strong>multi-node / distributed option</strong>.</p>
<p>This allows you to distribute your hypertable data and query processing across multiple TimescaleDB instances, scaling horizontally to handle petabytes of data and incredibly high ingestion rates. This distributed architecture provides scalability and resilience for the most demanding time-series applications.</p>
<p>By combining these powerful features, TimescaleDB provides a comprehensive platform for collecting, storing, and analyzing time-series data at scale, unlocking valuable insights and enabling real-time decision-making.</p>
<h2 id="heading-hands-on-walkthrough">Hands-On Walkthrough</h2>
<p>Getting started with TimescaleDB is straightforward. You can easily install it as an extension on an existing PostgreSQL database.</p>
<ul>
<li><p><strong>Schema Design</strong>: The key is to choose the correct <strong>time column</strong> (the timestamp for your data) and, optionally, a second <strong>partition key</strong> (like <code>device_id</code>) to further organize your data.</p>
</li>
<li><p><strong>Ingest</strong>: Data can be ingested just like in regular PostgreSQL using <code>INSERT</code> statements or in bulk with the <code>COPY</code> command for higher performance.</p>
</li>
<li><p><strong>Queries</strong>: TimescaleDB's real power shows in its query performance. For example, to find the average temperature over the last 24 hours, you would use a query with <code>time_bucket</code> on a hypertable.</p>
</li>
</ul>
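<p>Putting those pieces together, the 24-hour average query mentioned above might be written like this (assuming the <code>sensor_data</code> hypertable from earlier):</p>

```pgsql
SELECT time_bucket('1 hour', time) AS hour,
       avg(temperature) AS avg_temp
FROM sensor_data
WHERE time > now() - INTERVAL '24 hours'
GROUP BY hour
ORDER BY hour;
```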
<h2 id="heading-performance-benchmarks">Performance Benchmarks</h2>
<p>TimescaleDB significantly outperforms plain PostgreSQL for time-series workloads. This is because its architecture is designed to minimize I/O and CPU usage.</p>
<ul>
<li><p><strong>Speedups</strong>: TimescaleDB achieves faster ingest and lower query latency by using <strong>chunk pruning</strong>, where the query planner only scans the relevant data chunks instead of the entire table.</p>
</li>
<li><p><strong>The "Why"</strong>: Its speed comes from reduced I/O, thanks to time-based partitioning, and efficient background workers that handle tasks like compression and data retention, leaving the main database free to process queries and writes.</p>
</li>
</ul>
<h2 id="heading-operational-considerations">Operational Considerations</h2>
<p>Managing TimescaleDB is similar to managing PostgreSQL, but with some time-series-specific considerations.</p>
<ul>
<li><p><strong>Backup and Restore</strong>: Standard PostgreSQL backup tools work, but TimescaleDB also offers specialized tools for physical backups of hypertables.</p>
</li>
<li><p><strong>Monitoring</strong>: You can use standard PostgreSQL metrics, but TimescaleDB also provides a Prometheus adapter for detailed time-series metrics.</p>
</li>
<li><p><strong>Resource Sizing</strong>: The main resources to consider are <strong>CPU, memory, and disk space</strong>. The amount of disk space needed will depend heavily on your data retention policy and compression settings.</p>
</li>
</ul>
<h2 id="heading-real-world-use-cases">Real-World Use Cases</h2>
<p>TimescaleDB is used across various industries to handle high-volume time-series data.</p>
<ul>
<li><p><strong>Examples</strong>: Common use cases include:</p>
<ul>
<li><p><strong>IoT sensors</strong> for collecting and analyzing data from devices.</p>
</li>
<li><p><strong>DevOps metrics</strong> for monitoring server performance.</p>
</li>
<li><p><strong>Financial tick data</strong> for analyzing market trends.</p>
</li>
</ul>
</li>
<li><p><strong>Business Context</strong>: TimescaleDB's ability to perform <strong>SQL joins</strong> with relational tables is a major advantage, allowing you to combine time-series data with business data (e.g., joining sensor data with a table of device locations).</p>
</li>
</ul>
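<p>As a sketch of such a join, with a hypothetical <code>devices</code> table holding the business data:</p>

```pgsql
-- Relational table of device metadata (hypothetical)
CREATE TABLE devices (
    device_id TEXT PRIMARY KEY,
    location  TEXT
);

-- Average temperature per location over the last day:
-- time-series readings joined with business data
SELECT d.location, avg(s.temperature) AS avg_temp
FROM sensor_data s
JOIN devices d ON d.device_id = s.device_id
WHERE s.time > now() - INTERVAL '1 day'
GROUP BY d.location;
```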
<h2 id="heading-limitations-amp-when-not-to-use-timescaledb">Limitations &amp; When Not to Use TimescaleDB</h2>
<p>While powerful, TimescaleDB isn't a silver bullet for every data problem.</p>
<ul>
<li><p><strong>Low-Latency</strong>: It's generally not suited for extremely low-latency use cases (latency requirements under 2 milliseconds) where every microsecond counts.</p>
</li>
<li><p><strong>Analytical Queries</strong>: For very large-scale analytical queries already served by a dedicated columnar warehouse, TimescaleDB may not be the most efficient option.</p>
</li>
<li><p><strong>Constraints</strong>: Scenarios that require transactional constraints across multiple hypertables may not be well-suited for TimescaleDB's architecture.</p>
</li>
</ul>
<h2 id="heading-ecosystem-amp-tooling">Ecosystem &amp; Tooling</h2>
<p>TimescaleDB integrates seamlessly with the broader PostgreSQL and time-series ecosystem.</p>
<ul>
<li><p><strong>Integrations</strong>: It has native plugins for <strong>Grafana</strong>, an adapter for <strong>Prometheus</strong>, and a sink for <strong>Kafka Connect</strong>, making it easy to connect to other tools.</p>
</li>
<li><p><strong>Compatibility</strong>: It is fully compatible with standard PostgreSQL extensions like PostGIS and <code>pgcrypto</code>, and supports features like logical replication.</p>
</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>TimescaleDB stands out by extending PostgreSQL to provide a powerful, scalable, and easy-to-use solution for time-series data. It addresses the key limitations of a traditional relational database, offering high ingest rates, efficient storage, and fast analytics.</p>
]]></content:encoded></item><item><title><![CDATA[Functions and Scope]]></title><description><![CDATA[Introduction to Functions
What is a Function in JavaScript?
Functions are a fundamental concept in JavaScript, allowing you to group a set of statements together to perform a specific task.
Naming and Purpose
A function’s name should be descriptive a...]]></description><link>https://blog.sushant.dev/functions-and-scope</link><guid isPermaLink="true">https://blog.sushant.dev/functions-and-scope</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[Web Development]]></category><dc:creator><![CDATA[Sushant Pathare]]></dc:creator><pubDate>Tue, 29 Oct 2024 16:16:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1730218520543/92f5c51b-34e0-4a8c-a2b2-c796f4ae9f93.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction-to-functions"><strong>Introduction to Functions</strong></h1>
<h3 id="heading-what-is-a-function-in-javascript"><strong>What is a Function in JavaScript?</strong></h3>
<p>Functions are a fundamental concept in JavaScript, allowing you to group a set of statements together to perform a specific task.</p>
<h3 id="heading-naming-and-purpose"><strong>Naming and Purpose</strong></h3>
<p>A function’s name should be descriptive and concise, indicating its purpose. It’s a good practice to use verbs as function names, as they describe the action being performed. For example, <code>calculateTotal</code>, <code>validateInput</code>, or <code>formatData</code>.</p>
<h3 id="heading-function-declaration-vs-function-expression"><strong>Function Declaration vs. Function Expression</strong></h3>
<p>In JavaScript, you can declare a function using either a function declaration (FD) or a function expression (FE).</p>
<ul>
<li><p><strong>Function Declaration (FD):</strong> <code>function myFunction() { ... }</code></p>
<ul>
<li><p>Hoists to the top of its scope</p>
</li>
<li><p>Can be called before its definition</p>
<pre><code class="lang-javascript">  <span class="hljs-comment">// Function Declaration</span>
  <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">greet</span>(<span class="hljs-params"></span>) </span>{
      <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Hello, World!"</span>);
  }

  <span class="hljs-comment">// Calling the function</span>
  greet();  <span class="hljs-comment">// Output: Hello, World!</span>
</code></pre>
</li>
</ul>
</li>
<li><p><strong>Function Expression (FE):</strong> <code>const myFunction = function() { ... };</code></p>
<ul>
<li><p>Does not hoist</p>
</li>
<li><p>Must be assigned to a variable before use</p>
<pre><code class="lang-javascript">  <span class="hljs-comment">// Function Expression</span>
  <span class="hljs-keyword">const</span> greet = <span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params"></span>) </span>{
      <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Hello, World!"</span>);
  };

  <span class="hljs-comment">// Calling the function</span>
  greet();  <span class="hljs-comment">// Output: Hello, World!</span>
</code></pre>
</li>
</ul>
</li>
</ul>
<p><strong>Key Differences</strong></p>
<ul>
<li><p><strong>Hoisting</strong>: Function declarations are hoisted, so they can be called before they’re defined, while function expressions are not hoisted.</p>
</li>
<li><p><strong>Naming</strong>: Function expressions can be anonymous (no name), whereas function declarations always have a name.</p>
</li>
</ul>
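<p>To see both behaviors side by side, here is a small sketch (the function names are made up for illustration):</p>

```javascript
// Function declaration: hoisted, so this call works even though
// the definition appears later in the file.
sayHi(); // prints "Hi"

function sayHi() {
    console.log("Hi");
}

// Function expression: NOT hoisted. Calling sayBye() above this line
// would throw a ReferenceError.
const sayBye = function () {
    console.log("Bye");
};

sayBye(); // prints "Bye"
```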
<h3 id="heading-return-statement"><strong>Return Statement</strong></h3>
<p>The <code>return</code> statement is used to exit a function and pass a value back to the caller. You can return a value of any data type, including primitive types (e.g., numbers, strings) and complex types (e.g., objects, arrays).</p>
<p><strong>Example 1: Returning Primitive Data Types</strong></p>
<pre><code class="lang-javascript"><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">add</span>(<span class="hljs-params">a, b</span>) </span>{
    <span class="hljs-keyword">return</span> a + b; <span class="hljs-comment">// returns a number</span>
}

<span class="hljs-keyword">const</span> sum = add(<span class="hljs-number">5</span>, <span class="hljs-number">3</span>);
<span class="hljs-built_in">console</span>.log(sum); <span class="hljs-comment">// Output: 8</span>
</code></pre>
<p>In this example, add returns the result of adding a and b, which is a number.</p>
<p><strong>Example 2: Returning Complex Data Types</strong></p>
<pre><code class="lang-javascript"><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">getUser</span>(<span class="hljs-params">name, age</span>) </span>{
    <span class="hljs-keyword">return</span> {
        <span class="hljs-attr">name</span>: name,
        <span class="hljs-attr">age</span>: age
    }; <span class="hljs-comment">// returns an object</span>
}

<span class="hljs-keyword">const</span> user = getUser(<span class="hljs-string">"Alice"</span>, <span class="hljs-number">25</span>);
<span class="hljs-built_in">console</span>.log(user); <span class="hljs-comment">// Output: { name: 'Alice', age: 25 }</span>
</code></pre>
<p>Here, getUser returns an object with name and age properties.</p>
<h3 id="heading-parameter-passing"><strong>Parameter Passing</strong></h3>
<p>Functions can take arguments, which are passed when the function is called. You can define multiple parameters separated by commas. For example: <code>function greet(name, age) { ... }</code></p>
<p><strong>1. Basic Parameter Passing</strong></p>
<p>When you pass values (arguments) to a function, they are received by parameters inside the function.</p>
<pre><code class="lang-javascript"><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">greet</span>(<span class="hljs-params">name</span>) </span>{
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Hello, "</span> + name + <span class="hljs-string">"!"</span>);
}

greet(<span class="hljs-string">"Alice"</span>); <span class="hljs-comment">// Output: Hello, Alice!</span>
greet(<span class="hljs-string">"Bob"</span>);   <span class="hljs-comment">// Output: Hello, Bob!</span>
</code></pre>
<p>Here, name is a parameter of the greet function. When calling greet("Alice"), "Alice" is the argument passed to name.</p>
<p><strong>2. Passing Multiple Parameters</strong></p>
<p>A function can accept multiple parameters, and they are separated by commas.</p>
<pre><code class="lang-javascript"><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">add</span>(<span class="hljs-params">a, b</span>) </span>{
    <span class="hljs-keyword">return</span> a + b;
}

<span class="hljs-built_in">console</span>.log(add(<span class="hljs-number">5</span>, <span class="hljs-number">3</span>)); <span class="hljs-comment">// Output: 8</span>
<span class="hljs-built_in">console</span>.log(add(<span class="hljs-number">10</span>, <span class="hljs-number">20</span>)); <span class="hljs-comment">// Output: 30</span>
</code></pre>
<p>Here, a and b are parameters, and we pass values like 5 and 3 as arguments when calling add.</p>
<p><strong>3. Default Parameters</strong></p>
<p>In JavaScript, you can assign default values to parameters. If an argument is not passed, the parameter takes the default value.</p>
<pre><code class="lang-javascript"><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">greet</span>(<span class="hljs-params">name = <span class="hljs-string">"Guest"</span></span>) </span>{
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Hello, "</span> + name + <span class="hljs-string">"!"</span>);
}

greet();        <span class="hljs-comment">// Output: Hello, Guest!</span>
greet(<span class="hljs-string">"Alice"</span>); <span class="hljs-comment">// Output: Hello, Alice!</span>
</code></pre>
<p><strong>4. Rest Parameters</strong></p>
<p>The rest parameter (...args) allows you to pass any number of arguments as an array.</p>
<pre><code class="lang-javascript"><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">sum</span>(<span class="hljs-params">...numbers</span>) </span>{
    <span class="hljs-keyword">return</span> numbers.reduce(<span class="hljs-function">(<span class="hljs-params">total, num</span>) =&gt;</span> total + num, <span class="hljs-number">0</span>);
}

<span class="hljs-built_in">console</span>.log(sum(<span class="hljs-number">1</span>, <span class="hljs-number">2</span>, <span class="hljs-number">3</span>));    <span class="hljs-comment">// Output: 6</span>
<span class="hljs-built_in">console</span>.log(sum(<span class="hljs-number">4</span>, <span class="hljs-number">5</span>, <span class="hljs-number">6</span>, <span class="hljs-number">7</span>)); <span class="hljs-comment">// Output: 22</span>
</code></pre>
<p>In this example, ...numbers gathers all arguments into an array, making it easy to handle an unknown number of parameters.</p>
<h3 id="heading-scope"><strong>Scope</strong></h3>
<p>In JavaScript, each function creates its own <strong>scope</strong>, meaning variables declared inside a function are not accessible outside that function. This is also called <strong>local scope</strong>.</p>
<pre><code class="lang-javascript"><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">showMessage</span>(<span class="hljs-params"></span>) </span>{
    <span class="hljs-keyword">let</span> message = <span class="hljs-string">"Hello, World!"</span>;
    <span class="hljs-built_in">console</span>.log(message); <span class="hljs-comment">// Output: Hello, World!</span>
}

showMessage();
<span class="hljs-built_in">console</span>.log(message); <span class="hljs-comment">// Error: message is not defined</span>
</code></pre>
<p>In this example:</p>
<p>1. The message variable is declared inside the showMessage function using let.</p>
<p>2. <strong>Inside the function</strong>, console.log(message); successfully prints "Hello, World!".</p>
<p>3. <strong>Outside the function</strong>, attempting to access message results in an error (message is not defined) because message is local to showMessage and cannot be accessed from the global scope.</p>
<h3 id="heading-why-functions-are-important-for-structuring-code-and-avoiding-repetition">Why functions are important for structuring code and avoiding repetition</h3>
<ul>
<li><p><strong>Avoid Code Duplication</strong>: Functions help eliminate duplicated code by encapsulating similar logic into a single unit. This reduces the overall codebase size and makes it easier to maintain.</p>
</li>
<li><p><strong>Improved Code Readability</strong>: By breaking down complex logic into smaller, focused functions, code becomes more readable and easier to understand. This is especially important for large codebases or when working with multiple developers.</p>
</li>
<li><p><strong>Reusability</strong>: Functions enable code reuse throughout a program. When a function is written correctly, it can be called multiple times with different inputs, reducing the need to duplicate code.</p>
</li>
<li><p><strong>Easier Debugging</strong>: With functions, debugging becomes more efficient. When an issue arises, you can isolate the problem to a specific function, making it easier to identify and fix the issue.</p>
</li>
<li><p><strong>Modularity</strong>: Functions promote modularity by allowing you to develop and test individual components independently. This makes it easier to modify or replace specific parts of the code without affecting the entire program.</p>
</li>
</ul>
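<p>The points above can be seen in a tiny sketch: instead of repeating the same formatting logic at every call site, it lives in one reusable function (the function name and values here are hypothetical):</p>

```javascript
// One reusable function instead of repeating the formatting logic everywhere
function formatPrice(amount, currency = "INR") {
    return currency + " " + amount.toFixed(2);
}

console.log(formatPrice(100));         // "INR 100.00"
console.log(formatPrice(42.5, "USD")); // "USD 42.50"
```

If the formatting rule ever changes, it is fixed in one place rather than at every call site.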
<h3 id="heading-javascript-hoisting"><strong>JavaScript Hoisting</strong></h3>
<p><strong>Hoisting</strong> in JavaScript is a behavior where variable and function declarations are processed before any code runs, as if they were moved to the top of their scope. This means you can reference a variable or call a function before the line on which it is declared.</p>
<p><strong>How Hoisting Works:</strong></p>
<p>1. <strong>Function Declarations</strong>: Entire function declarations are hoisted to the top, allowing functions to be called before they appear in the code.</p>
<pre><code class="lang-javascript">sayHello(); <span class="hljs-comment">// Output: Hello!</span>

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">sayHello</span>(<span class="hljs-params"></span>) </span>{
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Hello!"</span>);
}
</code></pre>
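<p>Note that this applies to function <em>declarations</em>. With a function <em>expression</em> assigned to a variable, only the variable declaration is hoisted, not the function value, as this sketch shows:</p>

```javascript
// Only the declaration of sayHi is hoisted; its value is still undefined here
try {
    sayHi(); // throws, because sayHi has not been assigned yet
} catch (e) {
    console.log(e instanceof TypeError); // true
}

var sayHi = function () {
    console.log("Hi!");
};

sayHi(); // "Hi!" (works after the assignment runs)
```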
<p>2. <strong>Variable Declarations</strong>: Variable declarations are hoisted, but only the declaration itself (not the value assignment). This means that variables are initialized with undefined during hoisting.</p>
<pre><code class="lang-javascript"><span class="hljs-built_in">console</span>.log(name); <span class="hljs-comment">// Output: undefined</span>
<span class="hljs-keyword">var</span> name = <span class="hljs-string">"Alice"</span>;
</code></pre>
<p>Here, var name is hoisted, but the assignment = "Alice" is not, so name is undefined until it’s assigned a value.</p>
<p><strong>let and const Hoisting</strong></p>
<p>Variables declared with let and const are also hoisted, but they remain in a <strong>“temporal dead zone”</strong> from the start of the block until the declaration is encountered. Attempting to access them before declaration results in a ReferenceError.</p>
<pre><code class="lang-javascript"><span class="hljs-built_in">console</span>.log(age); <span class="hljs-comment">// ReferenceError: Cannot access 'age' before initialization</span>
<span class="hljs-keyword">let</span> age = <span class="hljs-number">25</span>;
</code></pre>
<p><strong><em>Temporal Dead Zone (TDZ)</em></strong></p>
<p><em>The Temporal Dead Zone (TDZ) is the period in JavaScript between the start of a block scope (e.g., within functions or loops) and the declaration of a variable using let or const. During this time, accessing the variable results in a ReferenceError because it is not yet initialized. Unlike var, which is hoisted and initialized to undefined, let and const variables cannot be referenced until their declaration is reached. Understanding TDZ is crucial for preventing errors related to uninitialized variables and promoting clear coding practices.</em></p>
]]></content:encoded></item><item><title><![CDATA[JavaScript Basics for Interviews]]></title><description><![CDATA[With 10 days of Diwali vacation ahead, I’m diving into JavaScript Mastery in 10 Days—a focused series designed to help JavaScript developers revisit core concepts and ace those must-know topics for interviews and beyond.
Data Types in JavaScript

Jav...]]></description><link>https://blog.sushant.dev/javascript-basics-for-interviews</link><guid isPermaLink="true">https://blog.sushant.dev/javascript-basics-for-interviews</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[js]]></category><category><![CDATA[interview]]></category><dc:creator><![CDATA[Sushant Pathare]]></dc:creator><pubDate>Fri, 25 Oct 2024 17:15:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1729876429528/654a8e27-10be-445d-99c8-4e232f564c4c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>With 10 days of Diwali vacation ahead, I’m diving into <em>JavaScript Mastery in 10 Days</em>—a focused series designed to help JavaScript developers revisit core concepts and ace those must-know topics for interviews and beyond.</p>
<h2 id="heading-data-types-in-javascript"><strong>Data Types in JavaScript</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729873066249/133af7d3-76b7-4c0d-a240-5fec59e0f938.png" alt class="image--center mx-auto" /></p>
<p>JavaScript includes eight fundamental data types, categorized into two groups: <strong>Primitive</strong> and <strong>Non-Primitive</strong> types.</p>
<h3 id="heading-primitive-data-types"><strong>Primitive Data Types</strong></h3>
<p>1. <strong>String</strong>: Used for textual data, represented as a sequence of characters.</p>
<p>• Example: 'hello', "hello world!"</p>
<p>2. <strong>Number</strong>: Represents both integer and floating-point numbers.</p>
<p>• Example: 3, 3.234, 3e-2</p>
<p>3. <strong>BigInt</strong>: An integer with arbitrary precision, introduced in ES2020, used for very large integers.</p>
<p>• Example: 900719925124740999n, 1n</p>
<p>4. <strong>Boolean</strong>: Represents a logical value—either true or false.</p>
<p>5. <strong>Null</strong>: A special keyword representing an intentional “no value” or “unknown”.</p>
<p>6. <strong>Undefined</strong>: Indicates a variable that has been declared but not initialized or assigned a value.</p>
<p>7. <strong>Symbol</strong>: A unique and immutable data type, introduced in ES6, often used for unique property keys in objects.</p>
<p>• Example: let value = Symbol();</p>
<h3 id="heading-non-primitive-data-types"><strong>Non-Primitive Data Types</strong></h3>
<p>1. <strong>Object</strong>: A collection of key-value pairs, where keys are strings and values can be any data type.</p>
<p>• Example: let student = { name: "John" };</p>
<p>2. <strong>Array</strong>: A specialized type of object that stores an ordered collection of values, which can be of any data type.</p>
<p>• Example: let numbers = [1, 2, 3, 4, 5];</p>
<p><strong>Additional Notes</strong></p>
<p>• JavaScript is a <strong>dynamically typed language</strong>, allowing variables to hold different data types at different times.</p>
<p>• The <strong>typeof operator</strong> can be used to determine the data type of a variable.</p>
<p>• JavaScript differentiates between <strong>null</strong> and <strong>undefined</strong> values, with distinct meanings and uses.</p>
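<p>A quick sketch of the typeof operator in action, including its well-known quirks:</p>

```javascript
console.log(typeof "hello");   // "string"
console.log(typeof 3.14);      // "number"
console.log(typeof 10n);       // "bigint"
console.log(typeof true);      // "boolean"
console.log(typeof undefined); // "undefined"
console.log(typeof Symbol());  // "symbol"
console.log(typeof {});        // "object"

// Quirks worth remembering for interviews:
console.log(typeof null);       // "object" (a historical quirk; null is not an object)
console.log(typeof []);         // "object" (arrays are objects; use Array.isArray instead)
console.log(Array.isArray([])); // true
```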
<h2 id="heading-javascript-variable-declarations-var-let-and-const"><strong>JavaScript Variable Declarations:</strong> var, let , and const</h2>
<p>In JavaScript, you can declare variables using var, let, or const, each serving a slightly different purpose:</p>
<p><strong>1. var:</strong></p>
<p>• <em>Scope</em>: var is function-scoped, which means it’s only accessible within the function it’s declared in. However, if declared outside of a function, it becomes available globally.</p>
<p>• <em>Usage</em>: While var has been used historically, it can lead to unexpected behavior in larger codebases, so it’s less common in modern JavaScript.</p>
<p><strong>2. let:</strong></p>
<p>• <em>Scope</em>: let is block-scoped, meaning it’s only accessible within the block {} it’s declared in.</p>
<p>• <em>Usage</em>: let is ideal when you plan to reassign the variable later in your code.</p>
<p><strong>3. const:</strong></p>
<p>• <em>Scope</em>: Like let, const is also block-scoped.</p>
<p>• <em>Reassignment</em>: Once assigned, a const variable cannot be reassigned. Use const when you don’t want the value to change, like with configuration settings or constants.</p>
<p>• <em>Mutable Objects</em>: For objects and arrays, const prevents reassignment, but you can still modify properties or elements within.</p>
<p><strong>Quick Tip</strong>: In modern JavaScript, use const by default, let when you need to reassign, and generally avoid var for cleaner, more predictable code.</p>
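<p>A short sketch of the differences described above:</p>

```javascript
if (true) {
    var a = 1;   // var ignores the block; it is function/global-scoped
    let b = 2;   // let is block-scoped; it exists only inside these braces
}

console.log(a);     // 1
// console.log(b);  // ReferenceError: b is not defined

const user = { name: "John" };
user.name = "Jane"; // allowed: const prevents reassignment, not mutation
// user = {};       // TypeError: Assignment to constant variable
console.log(user.name); // "Jane"
```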
<h2 id="heading-javascript-operators"><strong>JavaScript Operators</strong></h2>
<p>JavaScript provides a variety of operators to perform different types of operations on values. Here’s a concise summary:</p>
<h3 id="heading-1-arithmetic-operators"><strong>1. Arithmetic Operators</strong></h3>
<p>Used for basic mathematical operations:</p>
<p>• +  (Addition)</p>
<p>• -  (Subtraction)</p>
<p>• *  (Multiplication)</p>
<p>• /  (Division)</p>
<p>• %  (Modulus)</p>
<p>• ** (Exponentiation)</p>
<p>• ++ (Increment)</p>
<p>• -- (Decrement)</p>
<h3 id="heading-2-assignment-operators"><strong>2. Assignment Operators</strong></h3>
<p>Used to assign values to variables:</p>
<p>• =  (Assign)</p>
<p>• += (Add and assign)</p>
<p>• -= (Subtract and assign)</p>
<p>• *= (Multiply and assign)</p>
<p>• /= (Divide and assign)</p>
<p>• %= (Modulus and assign)</p>
<h3 id="heading-3-comparison-operators"><strong>3. Comparison Operators</strong></h3>
<p>Used to compare two values:</p>
<p>• ==  (Equal)</p>
<p>• === (Strict equal)</p>
<p>• !=  (Not equal)</p>
<p>• !== (Strict not equal)</p>
<p>• &gt;   (Greater than)</p>
<p>• &lt;   (Less than)</p>
<p>• &gt;=  (Greater than or equal to)</p>
<p>• &lt;=  (Less than or equal to)</p>
<h3 id="heading-4-logical-operators"><strong>4. Logical Operators</strong></h3>
<p>Used with boolean values:</p>
<p>• &amp;&amp; (AND)</p>
<p>• || (OR)</p>
<p>• !  (NOT)</p>
<h3 id="heading-5-bitwise-operators"><strong>5. Bitwise Operators</strong></h3>
<p>Operate on the binary representation of numbers:</p>
<p>• &amp;  (AND)</p>
<p>• |  (OR)</p>
<p>• ^  (XOR)</p>
<p>• ~  (NOT)</p>
<p>• &lt;&lt; (Left shift)</p>
<p>• &gt;&gt; (Right shift)</p>
<h3 id="heading-6-string-operators"><strong>6. String Operators</strong></h3>
<p>Primarily used for concatenating strings:</p>
<p>• +  (Concatenation)</p>
<p>• += (Concatenation assignment)</p>
<h3 id="heading-7-conditional-ternary-operator"><strong>7. Conditional (Ternary) Operator</strong></h3>
<p>A shorthand for if-else:</p>
<p>• condition ? valueIfTrue : valueIfFalse</p>
<h3 id="heading-8-type-operators"><strong>8. Type Operators</strong></h3>
<p>• typeof  (Returns the type of a variable)</p>
<p>• instanceof (Checks if an object is an instance of a specific class)</p>
<h3 id="heading-9-nullish-coalescing-operator"><strong>9. Nullish Coalescing Operator</strong></h3>
<p>Provides a default value for null or undefined:</p>
<p>• ??</p>
<h3 id="heading-10-optional-chaining-operator"><strong>10. Optional Chaining Operator</strong></h3>
<p>Allows safe access to deeply nested object properties:</p>
<p>• ?.</p>
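<p>A brief sketch showing the ternary, nullish coalescing, and optional chaining operators together (the settings object here is hypothetical):</p>

```javascript
const settings = { theme: null, volume: 0 };

// ?? falls back only on null or undefined, so a valid 0 is kept
console.log(settings.volume ?? 50);    // 0
console.log(settings.theme ?? "dark"); // "dark"

// ?. returns undefined instead of throwing when a link in the chain is missing
console.log(settings.profile?.email);  // undefined

// Ternary as a compact if-else
const label = settings.volume === 0 ? "muted" : "on";
console.log(label); // "muted"
```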
<h2 id="heading-control-structures-in-javascript"><strong>Control Structures in JavaScript</strong></h2>
<p>Control structures are essential for directing the flow of execution in a program. In JavaScript, they allow you to execute different blocks of code based on certain conditions or iterate through data. Here’s a summary of the main control structures:</p>
<h3 id="heading-1-conditional-statements"><strong>1. Conditional Statements</strong></h3>
<p>Conditional statements execute different code blocks based on specific conditions.</p>
<p><strong>• if Statement:</strong></p>
<pre><code class="lang-javascript"><span class="hljs-keyword">if</span> (condition) {
    <span class="hljs-comment">// Code to execute if condition is true</span>
}
</code></pre>
<p><strong>• if...else Statement:</strong></p>
<pre><code class="lang-javascript"><span class="hljs-keyword">if</span> (condition) {
    <span class="hljs-comment">// Code to execute if condition is true</span>
} <span class="hljs-keyword">else</span> {
    <span class="hljs-comment">// Code to execute if condition is false</span>
}
</code></pre>
<p><strong>• else if Statement:</strong></p>
<pre><code class="lang-javascript"><span class="hljs-keyword">if</span> (condition1) {
    <span class="hljs-comment">// Code for condition1</span>
} <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (condition2) {
    <span class="hljs-comment">// Code for condition2</span>
} <span class="hljs-keyword">else</span> {
    <span class="hljs-comment">// Code if none of the above conditions are true</span>
}
</code></pre>
<p><strong>• switch Statement:</strong></p>
<p>A cleaner way to execute different code blocks based on the value of a variable.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">switch</span> (expression) {
    <span class="hljs-keyword">case</span> value1:
        <span class="hljs-comment">// Code to execute if expression === value1</span>
        <span class="hljs-keyword">break</span>;
    <span class="hljs-keyword">case</span> value2:
        <span class="hljs-comment">// Code to execute if expression === value2</span>
        <span class="hljs-keyword">break</span>;
    <span class="hljs-keyword">default</span>:
        <span class="hljs-comment">// Code to execute if no cases match</span>
}
</code></pre>
<h3 id="heading-2-loops"><strong>2. Loops</strong></h3>
<p>Loops allow you to execute a block of code multiple times.</p>
<p><strong>• for Loop:</strong></p>
<p>Used for iterating over a range of values.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">for</span> (<span class="hljs-keyword">let</span> i = <span class="hljs-number">0</span>; i &lt; <span class="hljs-number">5</span>; i++) {
    <span class="hljs-comment">// Code to execute on each iteration</span>
}
</code></pre>
<p><strong>• while Loop:</strong></p>
<pre><code class="lang-javascript"><span class="hljs-keyword">while</span> (condition) {
    <span class="hljs-comment">// Code to execute while condition is true</span>
}
</code></pre>
<p><strong>• do...while Loop:</strong></p>
<p>Similar to while, but the code block is executed at least once before the condition is tested.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">do</span> {
    <span class="hljs-comment">// Code to execute at least once</span>
} <span class="hljs-keyword">while</span> (condition);
</code></pre>
<p><strong>• for...of Loop:</strong></p>
<p>Iterates over iterable objects (like arrays or strings).</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">for</span> (<span class="hljs-keyword">const</span> item <span class="hljs-keyword">of</span> iterable) {
    <span class="hljs-comment">// Code to execute for each item</span>
}
</code></pre>
<p><strong>• for...in Loop:</strong></p>
<p>Iterates over the enumerable properties of an object.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">for</span> (<span class="hljs-keyword">const</span> key <span class="hljs-keyword">in</span> object) {
    <span class="hljs-comment">// Code to execute for each property</span>
}
</code></pre>
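<p>The difference between the two loops is easiest to see on an array:</p>

```javascript
const letters = ["a", "b", "c"];

// for...of iterates over the VALUES of an iterable
for (const item of letters) {
    console.log(item); // "a", then "b", then "c"
}

// for...in iterates over the enumerable KEYS (array indices, as strings)
for (const key in letters) {
    console.log(key); // "0", then "1", then "2"
}
```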
<h3 id="heading-3-exception-handling"><strong>3. Exception Handling</strong></h3>
<p>To manage errors gracefully, JavaScript provides a way to handle exceptions.</p>
<p><strong>• try...catch Statement:</strong></p>
<p>Executes code that might throw an error and catches any errors that occur.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">try</span> {
    <span class="hljs-comment">// Code that may throw an error</span>
} <span class="hljs-keyword">catch</span> (error) {
    <span class="hljs-comment">// Code to execute if an error occurs</span>
} <span class="hljs-keyword">finally</span> {
    <span class="hljs-comment">// Code that runs regardless of an error</span>
}
</code></pre>
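<p>A concrete sketch of this pattern, using JSON.parse, which throws on invalid input (the safeParse helper is hypothetical):</p>

```javascript
function safeParse(text) {
    try {
        return JSON.parse(text); // may throw a SyntaxError
    } catch (error) {
        console.log("Parse failed: " + error.message);
        return null;
    } finally {
        // runs whether parsing succeeded or failed
    }
}

console.log(safeParse('{"ok": true}')); // { ok: true }
console.log(safeParse("not json"));     // null
```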
<h2 id="heading-type-coercion-in-javascript"><strong>Type Coercion in JavaScript</strong></h2>
<p>Type coercion is the process by which JavaScript automatically converts values from one type to another. This can happen in various situations, especially when performing operations involving different data types.</p>
<h3 id="heading-1-implicit-coercion"><strong>1. Implicit Coercion</strong></h3>
<p>JavaScript automatically converts types when necessary.</p>
<p>• <strong>String and Number</strong>:</p>
<pre><code class="lang-javascript"><span class="hljs-built_in">console</span>.log(<span class="hljs-number">5</span> + <span class="hljs-string">"5"</span>); <span class="hljs-comment">// "55" (Number is coerced to String)</span>
<span class="hljs-built_in">console</span>.log(<span class="hljs-string">"5"</span> - <span class="hljs-number">2</span>); <span class="hljs-comment">// 3 (String is coerced to Number)</span>
</code></pre>
<p>• <strong>Boolean and Number</strong>:</p>
<pre><code class="lang-javascript"><span class="hljs-built_in">console</span>.log(<span class="hljs-number">1</span> + <span class="hljs-literal">true</span>);  <span class="hljs-comment">// 2 (true is coerced to 1)</span>
<span class="hljs-built_in">console</span>.log(<span class="hljs-number">0</span> + <span class="hljs-literal">false</span>);  <span class="hljs-comment">// 0 (false is coerced to 0)</span>
</code></pre>
<p>• <strong>Boolean and String</strong>:</p>
<pre><code class="lang-javascript"><span class="hljs-built_in">console</span>.log(<span class="hljs-string">"The answer is "</span> + <span class="hljs-literal">true</span>); <span class="hljs-comment">// "The answer is true" (true is coerced to String)</span>
<span class="hljs-built_in">console</span>.log(<span class="hljs-string">"5"</span> + <span class="hljs-literal">false</span>); <span class="hljs-comment">// "5false" (false is coerced to String)</span>
</code></pre>
<h3 id="heading-2-explicit-coercion"><strong>2. Explicit Coercion</strong></h3>
<p>You can manually convert types using functions.</p>
<p>• <strong>Number to String</strong>:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">let</span> num = <span class="hljs-number">10</span>;
<span class="hljs-keyword">let</span> str = <span class="hljs-built_in">String</span>(num);
<span class="hljs-built_in">console</span>.log(str); <span class="hljs-comment">// "10"</span>
</code></pre>
<p>• <strong>String to Number</strong>:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">let</span> strNum = <span class="hljs-string">"20"</span>;
<span class="hljs-keyword">let</span> numFromStr = <span class="hljs-built_in">Number</span>(strNum);
<span class="hljs-built_in">console</span>.log(numFromStr); <span class="hljs-comment">// 20</span>
</code></pre>
<p>• <strong>Boolean to String</strong>:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">let</span> boolValue = <span class="hljs-literal">true</span>;
<span class="hljs-keyword">let</span> strBool = <span class="hljs-built_in">String</span>(boolValue);
<span class="hljs-built_in">console</span>.log(strBool); <span class="hljs-comment">// "true"</span>
</code></pre>
<p>• <strong>String to Boolean</strong> (using double negation):</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">let</span> nonEmptyString = <span class="hljs-string">"Hello"</span>;
<span class="hljs-keyword">let</span> isTrue = !!nonEmptyString;
<span class="hljs-built_in">console</span>.log(isTrue); <span class="hljs-comment">// true (non-empty string is truthy)</span>
</code></pre>
<h3 id="heading-3-coercion-in-comparisons"><strong>3. Coercion in Comparisons</strong></h3>
<p>Different types are compared through coercion.</p>
<p>• <strong>Using</strong> == <strong>(Equality)</strong>:</p>
<pre><code class="lang-javascript"><span class="hljs-built_in">console</span>.log(<span class="hljs-number">0</span> == <span class="hljs-string">"0"</span>); <span class="hljs-comment">// true (String is coerced to Number)</span>
<span class="hljs-built_in">console</span>.log(<span class="hljs-literal">false</span> == <span class="hljs-string">""</span>); <span class="hljs-comment">// true (Both are coerced to Number)</span>
</code></pre>
<p>• <strong>Using</strong> === <strong>(Strict Equality)</strong>:</p>
<pre><code class="lang-javascript"><span class="hljs-built_in">console</span>.log(<span class="hljs-number">0</span> === <span class="hljs-string">"0"</span>); <span class="hljs-comment">// false (No coercion, different types)</span>
<span class="hljs-built_in">console</span>.log(<span class="hljs-literal">false</span> === <span class="hljs-string">""</span>); <span class="hljs-comment">// false (No coercion, different types)</span>
</code></pre>
<h2 id="heading-equality-operator"><strong>== (Equality Operator)</strong></h2>
<p>The == operator, known as the equality operator, compares two values for equality <strong>after performing type coercion</strong> if they are of different types. This means JavaScript will convert the values to a common type before making the comparison.</p>
<p><strong>Examples of ==:</strong></p>
<p><strong>1. Different Types:</strong></p>
<pre><code class="lang-javascript"><span class="hljs-built_in">console</span>.log(<span class="hljs-number">5</span> == <span class="hljs-string">'5'</span>);    <span class="hljs-comment">// true (String '5' is coerced to Number 5)</span>
<span class="hljs-built_in">console</span>.log(<span class="hljs-literal">false</span> == <span class="hljs-number">0</span>);   <span class="hljs-comment">// true (false is coerced to 0)</span>
<span class="hljs-built_in">console</span>.log(<span class="hljs-literal">null</span> == <span class="hljs-literal">undefined</span>); <span class="hljs-comment">// true (both are treated as equal)</span>
</code></pre>
<p>2. <strong>Same Types</strong>:</p>
<pre><code class="lang-javascript"><span class="hljs-built_in">console</span>.log(<span class="hljs-number">10</span> == <span class="hljs-number">10</span>);     <span class="hljs-comment">// true</span>
<span class="hljs-built_in">console</span>.log(<span class="hljs-string">'hello'</span> == <span class="hljs-string">'hello'</span>); <span class="hljs-comment">// true</span>
</code></pre>
<h2 id="heading-strict-equality-operator"><strong>=== (Strict Equality Operator)</strong></h2>
<p>The === operator, known as the strict equality operator, compares two values for equality <strong>without performing type coercion</strong>. If the types are different, the comparison will return false.</p>
<p><strong>Examples of ===:</strong></p>
<p>1. <strong>Different Types</strong>:</p>
<pre><code class="lang-javascript"><span class="hljs-built_in">console</span>.log(<span class="hljs-number">5</span> === <span class="hljs-string">'5'</span>);    <span class="hljs-comment">// false (different types, no coercion)</span>
<span class="hljs-built_in">console</span>.log(<span class="hljs-literal">false</span> === <span class="hljs-number">0</span>);   <span class="hljs-comment">// false (different types)</span>
<span class="hljs-built_in">console</span>.log(<span class="hljs-literal">null</span> === <span class="hljs-literal">undefined</span>); <span class="hljs-comment">// false (different types)</span>
</code></pre>
<p>2. <strong>Same Types</strong>:</p>
<pre><code class="lang-javascript"><span class="hljs-built_in">console</span>.log(<span class="hljs-number">10</span> === <span class="hljs-number">10</span>);     <span class="hljs-comment">// true</span>
<span class="hljs-built_in">console</span>.log(<span class="hljs-string">'hello'</span> === <span class="hljs-string">'hello'</span>); <span class="hljs-comment">// true</span>
</code></pre>
<p>That’s a wrap for Day 1! These basic concepts are essential for anyone looking to delve deeper into more advanced topics in JavaScript. Understanding these fundamentals will provide you with a solid foundation to build upon as we explore more complex topics in the coming days.</p>
<p>Stay tuned for Day 2, where we’ll dive into variable declarations and scope! Happy coding!</p>
]]></content:encoded></item><item><title><![CDATA[Getting Started with Docker]]></title><description><![CDATA[What problem Docker solves?
You've probably heard this many times in software development: It works on my machine.
Docker solves this problem of it works on my machine. When developers create software, it often runs smoothly on their computer during ...]]></description><link>https://blog.sushant.dev/getting-started-with-docker</link><guid isPermaLink="true">https://blog.sushant.dev/getting-started-with-docker</guid><category><![CDATA[Docker]]></category><category><![CDATA[containers]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[DevOps trends]]></category><dc:creator><![CDATA[Sushant Pathare]]></dc:creator><pubDate>Fri, 02 Feb 2024 16:01:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1706256122495/12ad6bb9-cdbb-4685-9e61-ac8e8d4f9dd7.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-what-problem-docker-solves">What problem does Docker solve?</h2>
<p>You've probably heard this many times in software development: <strong>It works on my machine.</strong></p>
<p>Docker solves this "it works on my machine" problem. When developers create software, it often runs smoothly on their own computers during development, but moving it to a different environment, such as a server or a colleague's machine, can surface issues. This happens because the software may depend on specific settings, libraries, or configurations that differ between machines.</p>
<p>Docker provides a solution by allowing developers to package their applications along with all the dependencies, libraries, and configurations into a standardized container. This container can then be easily moved and run consistently across different environments, ensuring that the application behaves the same way everywhere. It simplifies the process of software deployment, making it more reliable and efficient.</p>
<h3 id="heading-what-is-docker">What is Docker?</h3>
<p>Docker is a platform that enables developers to build, package, and distribute applications as lightweight, portable containers. These containers can run consistently across various environments, ensuring that the application behaves the same way regardless of where it's deployed.</p>
<h3 id="heading-why-is-docker-important-in-the-world-of-software-development"><strong>Why is Docker important in the world of software development?</strong></h3>
<p>Docker simplifies the process of software development by allowing developers to bundle their applications and dependencies into containers. These containers encapsulate everything needed for the application to run, making it easy to deploy and run the same application on different machines or servers. Docker also promotes consistency across development, testing, and production environments, reducing the "it works on my machine" problem. It enhances collaboration among team members and facilitates efficient scaling and deployment of applications.</p>
<h3 id="heading-containerization"><strong>Containerization</strong></h3>
<p>Imagine a shipping container – those large, standardized metal boxes you see on cargo ships and trucks. This container is a self-contained unit that holds various goods.</p>
<p>In the same way, a software container is like a digital version of this shipping container. It packs up an application and everything it needs to run – the code, libraries, and dependencies. This digital container ensures that the application can be easily transported and deployed on different systems, just like the physical shipping container makes it simple to move goods between ships, trucks, and trains. The application inside the container runs consistently, regardless of where it's placed, making it practical and efficient for software development and deployment.</p>
<h3 id="heading-how-containers-differ-from-virtual-machines">How containers differ from virtual machines</h3>
<p><strong>Real-world Analogy: Running a Café</strong></p>
<p><strong>1. Traditional Approach (Virtual Machines):</strong></p>
<ul>
<li><p>Imagine you're running a café, and each dish is prepared in its own fully equipped kitchen (virtual machine). These kitchens (VMs) are separate and have everything needed for cooking – stove, utensils, and ingredients.</p>
</li>
<li><p>Now, if you want to add a new dish (application) to your menu, you need to set up an entirely new kitchen (VM) for it. This can be resource-intensive because each kitchen runs its own operating system.</p>
</li>
</ul>
<p><strong>2. Container Approach:</strong></p>
<ul>
<li><p>Containers are like having a versatile food truck. The food truck (container) comes with its own kitchen, but it shares the main resources like the stove, water supply, and utensils with other food trucks (containers).</p>
</li>
<li><p>When you want to add a new dish (application), you simply use another section of the food truck (container) with its specific ingredients. It's like having different compartments for different dishes but sharing the overall kitchen space efficiently.</p>
</li>
</ul>
<p><strong>In Software Terms:</strong></p>
<ul>
<li><p><strong>Virtual Machines (VMs):</strong></p>
<ul>
<li>Each VM is a full-fledged kitchen (with its operating system) for one dish (application). This can be resource-heavy as it duplicates the entire setup for each dish.</li>
</ul>
</li>
<li><p><strong>Containers:</strong></p>
<ul>
<li>Containers are like dedicated cooking stations within a shared kitchen. Each container has only what's needed for a specific dish (application), sharing the common resources efficiently.</li>
</ul>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706884906601/a97abf6d-f83d-4239-a2e4-8520dd4efae5.avif" alt class="image--center mx-auto" /></p>
<p><strong>Installing Docker on Different Operating Systems:</strong></p>
<ol>
<li><p><strong>Windows:</strong></p>
<ul>
<li><p>Go to the Docker website and download the Docker Desktop for Windows.</p>
</li>
<li><p>Run the installer and follow the on-screen instructions.</p>
</li>
<li><p>Docker Desktop may prompt you to enable WSL 2 or Hyper-V. Allow this, as a virtualization backend is required for running containers on Windows.</p>
</li>
<li><p>Once the installation is complete, Docker will start automatically.</p>
</li>
</ul>
</li>
<li><p><strong>macOS:</strong></p>
<ul>
<li><p>Visit the Docker website and download Docker Desktop for Mac.</p>
</li>
<li><p>Open the downloaded .dmg file and drag the Docker icon to your Applications folder.</p>
</li>
<li><p>Run Docker from Applications. It may ask for your system password during installation.</p>
</li>
<li><p>Docker Desktop will launch when the installation is done.</p>
</li>
</ul>
</li>
</ol>
<p><strong>Linux:</strong></p>
<ul>
<li><p>Docker can be installed on various Linux distributions. Check your distribution's documentation for specific instructions.</p>
</li>
<li><p>On Ubuntu, you can use the following commands:</p>
<pre><code class="lang-bash">  sudo apt update
  sudo apt install docker.io
</code></pre>
</li>
<li><p>Start the Docker service:</p>
<pre><code class="lang-bash">  sudo systemctl start docker
</code></pre>
</li>
<li><p>To enable Docker to start on boot:</p>
<pre><code class="lang-bash">  sudo systemctl <span class="hljs-built_in">enable</span> docker
</code></pre>
</li>
<li><p>Add your user to the <code>docker</code> group to run Docker commands without <code>sudo</code>:</p>
<pre><code class="lang-bash">  sudo usermod -aG docker <span class="hljs-variable">$USER</span>
</code></pre>
</li>
<li><p>Log out and log back in to apply the changes.</p>
</li>
</ul>
<p><a target="_blank" href="https://docs.docker.com/desktop/">Installation guide</a></p>
<h3 id="heading-docker-components">Docker Components:</h3>
<p><strong>1. Docker Engine:</strong></p>
<ul>
<li>The Docker Engine is the heart of Docker. It's the construction site manager that oversees everything. Just as a construction manager handles building tasks, the Docker Engine manages containers. It takes your construction plans (Docker images) and turns them into a functional building (running containers).</li>
</ul>
<p><strong>2. Docker Hub:</strong></p>
<ul>
<li>Docker Hub is like a blueprint library for builders. It's an online repository where you can find and share pre-made construction plans (Docker images). For instance, you can discover a blueprint for a WordPress website or a database server on Docker Hub.</li>
</ul>
<p><strong>3. Docker CLI (Command-Line Interface):</strong></p>
<ul>
<li><p>The Docker CLI is your toolkit control panel. Using simple commands, you communicate with the Docker Engine to execute construction tasks. For example:</p>
<ul>
<li><p><code>docker run wordpress</code> instructs the Docker Engine to start building a WordPress website.</p>
</li>
<li><p><code>docker stop container_id</code> tells the Docker Engine to halt the construction of a specific container.</p>
</li>
</ul>
</li>
</ul>
<p><strong>4. Docker Compose:</strong></p>
<ul>
<li><p>Docker Compose is like an architect's blueprint that lays out the entire construction plan for your project. It's a tool for defining and managing multi-container applications. With a simple <code>docker-compose.yml</code> file, you can specify which containers to build, how they should connect, and their configurations. For example:</p>
<pre><code class="lang-yaml">  version: '3'
  services:
    web:
      image: nginx
    database:
      image: postgres
</code></pre>
<p>  Running <code>docker-compose up</code> will start constructing both a web server (using the Nginx image) and a database (using the PostgreSQL image) according to your blueprint.</p>
</li>
</ul>
<p><strong>Example Scenario: Building a Website:</strong> Imagine you want to construct a website. You find a blueprint for a web server on Docker Hub. Using the Docker CLI, you run <code>docker run -p 80:80 nginx</code> to start building the web server. Now, anyone accessing your construction site (web browser) sees the website taking shape.</p>
<p>Later, you decide to add a database to your website. You create a Docker Compose file specifying both the web server and the database containers. Running <code>docker-compose up</code> constructs the entire project, with the web server and database connected as per your blueprint.</p>
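<p>A Compose file for this website-plus-database scenario could look like the sketch below (illustrative only; the <code>ports</code> mapping and the <code>POSTGRES_PASSWORD</code> value are placeholders you would set yourself, and the official <code>postgres</code> image requires a password to start):</p>
<pre><code class="lang-yaml">version: '3'
services:
  web:
    image: nginx
    ports:
      - "80:80"
  database:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example   # placeholder; choose your own
</code></pre>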
<p>In this analogy, Docker components work together seamlessly to simplify the construction (deployment) of your project, just as a construction site manager, blueprint library, toolkit control panel, and architect's plan collaborate to build a physical structure.</p>
<h3 id="heading-creating-your-first-container"><strong>Creating Your First Container:</strong></h3>
<p><strong>1. Pulling the Docker Image:</strong></p>
<ul>
<li><p>Open your terminal or command prompt.</p>
</li>
<li><p>Run the following command to pull the Nginx image from Docker Hub:</p>
<pre><code class="lang-bash">  docker pull nginx
</code></pre>
</li>
<li><p>This command fetches the Nginx image from Docker Hub and stores it on your local machine.</p>
</li>
</ul>
<p>Output:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706886798915/0312f934-de15-4adf-9685-4f652cfa333d.png" alt class="image--center mx-auto" /></p>
<p><strong>2. Running a Container:</strong></p>
<ul>
<li><p>Now, let's run a container using the pulled Nginx image:</p>
<pre><code class="lang-bash">  docker run -d -p 8080:80 --name my-nginx nginx
</code></pre>
<ul>
<li><p>The <code>-d</code> flag runs the container in the background (detached mode).</p>
</li>
<li><p>The <code>-p 8080:80</code> flag maps port 8080 on your host machine to port 80 inside the container. This means you can access the web server at <a target="_blank" href="http://localhost:8080"><code>http://localhost:8080</code></a>.</p>
</li>
<li><p><code>--name my-nginx</code> assigns a custom name ("my-nginx") to your container for easy reference.</p>
</li>
<li><p>Finally, <code>nginx</code> specifies the image to use.</p>
</li>
</ul>
</li>
</ul>
<p><strong>3. Accessing the Web Server:</strong></p>
<ul>
<li><p>Open your web browser and navigate to <a target="_blank" href="http://localhost:8080"><code>http://localhost:8080</code></a>.</p>
</li>
<li><p>You should see the default Nginx welcome page, indicating that your containerized web server is up and running.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706886906960/0d7c53b6-2f8d-40f0-b8e4-ca76a5e69a5f.png" alt class="image--center mx-auto" /></p>
<p><strong>Explanation:</strong></p>
<ul>
<li><p>The <code>docker run</code> command creates and starts a container based on the specified image.</p>
</li>
<li><p>The <code>-d</code> flag runs the container in the background, allowing you to continue using your terminal.</p>
</li>
<li><p>The <code>-p</code> flag maps ports between the host and the container. In this case, port 8080 on your machine is linked to port 80 in the container.</p>
</li>
<li><p><code>--name</code> assigns a custom name to your container (optional but useful for identification).</p>
</li>
<li><p>The last argument (<code>nginx</code>) specifies the image to use for the container.</p>
</li>
</ul>
<h3 id="heading-basic-docker-dommands">Basic Docker Commands</h3>
<p><strong>1. Listing Running Containers:</strong></p>
<ul>
<li><p>To see which containers are currently running, use:</p>
<pre><code class="lang-bash">  docker ps
</code></pre>
</li>
<li><p>This command displays a list of running containers along with essential information such as container ID, names, ports, etc.</p>
</li>
</ul>
<p><strong>2. Stopping a Container:</strong></p>
<ul>
<li><p>If you want to stop a running container, use:</p>
<pre><code class="lang-bash">  docker stop container_id_or_name
</code></pre>
</li>
<li><p>Replace <code>container_id_or_name</code> with the actual container ID or name.</p>
</li>
</ul>
<p><strong>3. Starting a Stopped Container:</strong></p>
<ul>
<li><p>To start a previously stopped container, use:</p>
<pre><code class="lang-bash">  docker start container_id_or_name
</code></pre>
</li>
</ul>
<p><strong>4. Removing a Container:</strong></p>
<ul>
<li><p>If you want to remove a stopped container, use:</p>
<pre><code class="lang-bash">  docker rm container_id_or_name
</code></pre>
</li>
<li><p>This deletes the specified container, freeing up resources.</p>
</li>
</ul>
<p><strong>5. Viewing all Containers (including Stopped Ones):</strong></p>
<ul>
<li><p>To see all containers, including those that are stopped, use:</p>
<pre><code class="lang-bash">  docker ps -a
</code></pre>
</li>
</ul>
<p><strong>6. Executing Commands Inside a Container:</strong></p>
<ul>
<li><p>You can run commands directly inside a running container. For example:</p>
<pre><code class="lang-bash">  docker <span class="hljs-built_in">exec</span> -it container_id_or_name /bin/bash
</code></pre>
<ul>
<li>This opens a bash shell inside the specified container, allowing you to execute commands within its environment.</li>
</ul>
</li>
</ul>
<p><strong>7. Viewing Docker Images:</strong></p>
<ul>
<li><p>To see a list of locally stored Docker images, use:</p>
<pre><code class="lang-bash">  docker images
</code></pre>
</li>
</ul>
<p><strong>8. Removing a Docker Image:</strong></p>
<ul>
<li><p>If you want to remove a Docker image from your local machine, use:</p>
<pre><code class="lang-bash">  docker rmi image_id_or_name
</code></pre>
<ul>
<li>This deletes the specified image.</li>
</ul>
</li>
</ul>
<p>Let's try to delete the container we started earlier. Enter the command <code>docker ps</code>, which displays the list of currently running containers, and check whether our container is listed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706887924939/10f5902f-14a9-48d4-9055-e4171fb0513f.png" alt class="image--center mx-auto" /></p>
<p>Now, to stop the running container, type <code>docker stop container_id</code> (in our case, <code>b808fc6c742b</code>). You can verify that it has stopped by running <code>docker ps</code> again.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706888231395/e223d226-acbb-4a4a-8236-b7075b189645.png" alt class="image--center mx-auto" /></p>
<p>There are no containers running now. To see stopped containers as well, use the <code>-a</code> flag, which lists all containers.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706888474507/d90fad70-2759-46df-9ccd-000999764436.png" alt class="image--center mx-auto" /></p>
<p>To restart the stopped container, use the <code>docker start container_id</code> command.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706888740756/a4011cb2-48cc-4a76-b515-08370636c51c.png" alt class="image--center mx-auto" /></p>
<p>To delete the container, first stop it, then type <code>docker rm container_id</code>. This command deletes the stopped container.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706888919020/29a6846e-0469-470f-accd-2ce83c6ba234.png" alt class="image--center mx-auto" /></p>
<p>Congratulations! Now you know how to spin up containers, but that's not all there is to Docker. There are many more concepts, which I will cover in my next blog. I will add the link soon.</p>
]]></content:encoded></item><item><title><![CDATA[Strings in JavaScript]]></title><description><![CDATA[In JavaScript, a string is a sequence of characters used to represent text. Strings are used to store and manipulate textual data. They are created by enclosing text within single quotes (' '), double quotes (" "), or backticks (` `). Here are differ...]]></description><link>https://blog.sushant.dev/strings-in-javascript</link><guid isPermaLink="true">https://blog.sushant.dev/strings-in-javascript</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[string]]></category><category><![CDATA[Methods]]></category><dc:creator><![CDATA[Sushant Pathare]]></dc:creator><pubDate>Mon, 30 Oct 2023 08:52:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1698651721168/1efa4fbe-2153-4eda-8af7-5d9874fe2a70.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In JavaScript, a string is a sequence of characters used to represent text. Strings are used to store and manipulate textual data. They are created by enclosing text within single quotes (' '), double quotes (" "), or backticks (` `). Here are different ways to define a string in JavaScript:</p>
<ol>
<li><p><strong>Using Single Quotes:</strong></p>
<pre><code class="lang-javascript"> <span class="hljs-keyword">let</span> myString = <span class="hljs-string">'This is a string using single quotes.'</span>;
</code></pre>
</li>
<li><p><strong>Using Double Quotes:</strong></p>
<pre><code class="lang-javascript"> <span class="hljs-keyword">let</span> anotherString = <span class="hljs-string">"This is a string using double quotes."</span>;
</code></pre>
</li>
<li><p><strong>Using Backticks (Template literals):</strong></p>
<p> Template literals introduced in ES6 allow for more flexible string definitions. They allow embedding expressions within strings using <code>${}</code>.</p>
<pre><code class="lang-javascript"> <span class="hljs-keyword">let</span> name = <span class="hljs-string">'John'</span>;
 <span class="hljs-keyword">let</span> age = <span class="hljs-number">30</span>;
 <span class="hljs-keyword">let</span> templateString = <span class="hljs-string">`My name is <span class="hljs-subst">${name}</span> and I'm <span class="hljs-subst">${age}</span> years old.`</span>;
</code></pre>
</li>
</ol>
<p>Strings in JavaScript are immutable, meaning their content cannot be changed once they are created. However, you can create new strings based on existing ones through various methods or string manipulation functions. JavaScript provides numerous built-in string methods to perform operations like concatenation, splitting, substring extraction, searching, and more.</p>
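<p>You can see this immutability in action with a quick sketch (an illustrative addition, not from the examples above):</p>
<pre><code class="lang-javascript">let str = "hello";
str[0] = "H";          // silently ignored (in strict mode this throws a TypeError)
console.log(str);      // Output: "hello" (unchanged)

let upper = str.toUpperCase();
console.log(upper);    // Output: "HELLO" (a brand-new string)
</code></pre>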
<h3 id="heading-string-methods">String Methods</h3>
<p>JavaScript provides numerous string methods for our use. Not all of them are equally important, and it's impractical to memorize every one, so only the important ones, used on a daily basis, will be discussed.</p>
<p>When you console log the string, you'll see that it has one property and numerous methods.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698652897215/b8d545e1-9ce9-4ef4-b866-865be2ea9810.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p><code>length</code>:</p>
<p>  The <code>length</code> property in JavaScript is used to determine the number of characters in a string. It returns the count of characters in a string, including letters, numbers, spaces, and punctuation.</p>
<p>  <strong>Example:</strong></p>
<pre><code class="lang-javascript">  <span class="hljs-keyword">const</span> myString = <span class="hljs-string">"Hello, World!"</span>;
  <span class="hljs-built_in">console</span>.log(myString.length); <span class="hljs-comment">// Output: 13</span>
</code></pre>
<p>  In this example, <code>myString.length</code> will output <code>13</code> because the string "Hello, World!" contains 13 characters including letters, a comma, a space, and an exclamation mark.</p>
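<p>  One caveat worth adding here: <code>length</code> actually counts UTF-16 code units rather than user-perceived characters, so characters outside the Basic Multilingual Plane, such as many emoji, count as 2:</p>
<pre><code class="lang-javascript">  console.log("A".length);  // Output: 1
  console.log("😀".length); // Output: 2 (one emoji, two UTF-16 code units)
</code></pre>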
</li>
<li><p><code>slice():</code></p>
<p>  The <code>slice()</code> method in JavaScript extracts a section of a string and returns it as a new string, without modifying the original string.</p>
<p>  <strong>Syntax:</strong></p>
<pre><code class="lang-javascript">  string.slice(startIndex, endIndex)
</code></pre>
<p>  Parameters:</p>
<ul>
<li><p><code>startIndex</code>: The index where the extraction begins.</p>
</li>
<li><p><code>endIndex</code> (optional): The index at which the extraction ends (not included in the extracted string). If not specified, extraction continues to the end of the string.</p>
</li>
</ul>
</li>
</ul>
<p>    Example:</p>
<pre><code class="lang-javascript">    <span class="hljs-keyword">const</span> str = <span class="hljs-string">"Hello, World!"</span>;
    <span class="hljs-keyword">const</span> sliced = str.slice(<span class="hljs-number">7</span>, <span class="hljs-number">12</span>);
    <span class="hljs-built_in">console</span>.log(sliced); <span class="hljs-comment">// Output: "World"</span>
</code></pre>
<p>    In the example, <code>str.slice(7, 12)</code> extracts the characters from index 7 (inclusive) to index 12 (exclusive) from the <code>str</code> string, resulting in the string "World".</p>
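<p>    As a small addition to the example above, <code>slice()</code> also accepts negative indices, which count back from the end of the string:</p>
<pre><code class="lang-javascript">    const str = "Hello, World!";
    console.log(str.slice(-6));     // Output: "World!" (last 6 characters)
    console.log(str.slice(-6, -1)); // Output: "World"  (stops before the last character)
</code></pre>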
<ul>
<li><p><code>substring():</code></p>
<p>  The <code>substring()</code> method in JavaScript is used to extract a portion of a string and return it as a new string. It takes two parameters: the starting index and the ending index (optional). If the ending index is not specified, <code>substring()</code> will extract from the start index to the end of the string.</p>
<p>  <strong>Example:</strong></p>
<pre><code class="lang-javascript">  <span class="hljs-keyword">const</span> str = <span class="hljs-string">"This is a sample string"</span>;
  <span class="hljs-keyword">const</span> extracted = str.substring(<span class="hljs-number">5</span>, <span class="hljs-number">10</span>);
  <span class="hljs-built_in">console</span>.log(extracted); <span class="hljs-comment">// Output: "is a "</span>
</code></pre>
<p>  In this example, <code>substring(5, 10)</code> extracts characters from index 5 (inclusive) to index 10 (exclusive) from the string <code>str</code>. The extracted substring is <code>"is a "</code>. If the second parameter (ending index) is omitted, the substring would extend to the end of the string.</p>
</li>
<li><p><code>substr():</code><br />  The <code>substr()</code> method in JavaScript extracts a portion of a string, starting from a specified index to the number of characters indicated.</p>
<p>  <strong>Syntax:</strong></p>
<pre><code class="lang-javascript">  string.substr(startIndex, length)
</code></pre>
<ul>
<li><p><code>startIndex</code>: The index from which to start extracting characters. If negative, it represents an offset from the end of the string.</p>
</li>
<li><p><code>length</code> (optional): The number of characters to extract. If omitted, it extracts characters until the end of the string.</p>
</li>
</ul>
</li>
</ul>
<p>    <strong>Example:</strong></p>
<pre><code class="lang-javascript">    <span class="hljs-keyword">const</span> sentence = <span class="hljs-string">"The quick brown fox"</span>;
    <span class="hljs-built_in">console</span>.log(sentence.substr(<span class="hljs-number">4</span>, <span class="hljs-number">5</span>)); <span class="hljs-comment">// Outputs: "quick"</span>
    <span class="hljs-built_in">console</span>.log(sentence.substr(<span class="hljs-number">10</span>)); <span class="hljs-comment">// Outputs: "brown fox"</span>
    <span class="hljs-built_in">console</span>.log(sentence.substr(<span class="hljs-number">-3</span>)); <span class="hljs-comment">// Outputs: "fox"</span>
</code></pre>
<p>    In this example:</p>
<ul>
<li><p><code>sentence.substr(4, 5)</code> extracts 5 characters starting from the index 4, which returns "quick".</p>
</li>
<li><p><code>sentence.substr(10)</code> starts from index 10 and extracts characters until the end, resulting in "brown fox".</p>
</li>
<li><p><code>sentence.substr(-3)</code> starts 3 characters from the end of the string and extracts until the end, resulting in "fox".</p>
</li>
</ul>
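<p>    One more illustrative comparison (not from the original examples): <code>substring()</code> swaps its arguments when the start index is greater than the end index and treats negative indices as 0, while <code>slice()</code> returns an empty string for a reversed range:</p>
<pre><code class="lang-javascript">    const str = "Hello, World!";
    console.log(str.substring(5, 0)); // Output: "Hello" (arguments swapped to (0, 5))
    console.log(str.slice(5, 0));     // Output: ""      (reversed range yields empty string)
    console.log(str.substring(-3));   // Output: "Hello, World!" (negative start treated as 0)
</code></pre>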
<ul>
<li><p><code>replace():</code></p>
<p>  The <code>replace()</code> method in JavaScript is used to replace the first occurrence of a specified string or pattern within a larger string with another string.</p>
<p>  <strong>Syntax:</strong></p>
<pre><code class="lang-javascript">  string.replace(searchValue, replaceValue)
</code></pre>
<ul>
<li><p><code>searchValue</code>: The string or regular expression to be replaced.</p>
</li>
<li><p><code>replaceValue</code>: The string that replaces the found value.</p>
</li>
</ul>
</li>
</ul>
<p>    <strong>Example:</strong></p>
<pre><code class="lang-javascript">    <span class="hljs-keyword">const</span> sentence = <span class="hljs-string">"I love JavaScript. JavaScript is amazing!"</span>;
    <span class="hljs-keyword">const</span> newSentence = sentence.replace(<span class="hljs-string">"JavaScript"</span>, <span class="hljs-string">"coding"</span>);
    <span class="hljs-built_in">console</span>.log(newSentence);
    <span class="hljs-comment">// Output: "I love coding. JavaScript is amazing!"</span>
</code></pre>
<p>    In this example, the <code>replace()</code> method finds the first occurrence of "JavaScript" in the <code>sentence</code> string and replaces it with "coding", creating a new string assigned to <code>newSentence</code>.</p>
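<p>    Since <code>replace()</code> also accepts a regular expression as its <code>searchValue</code>, a pattern with the <code>g</code> (global) flag replaces every match; here is a small illustrative sketch:</p>
<pre><code class="lang-javascript">    const sentence = "I love JavaScript. JavaScript is amazing!";
    const allReplaced = sentence.replace(/JavaScript/g, "coding");
    console.log(allReplaced);
    // Output: "I love coding. coding is amazing!"
</code></pre>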
<ul>
<li><p><code>replaceAll():</code></p>
<p>  The <code>replaceAll()</code> method in JavaScript is used to replace all occurrences of a specified substring within a string with another substring.</p>
<p>  <strong>Syntax:</strong></p>
<pre><code class="lang-javascript">  string.replaceAll(searchValue, replaceValue);
</code></pre>
<ul>
<li><p><code>searchValue</code>: The substring you want to replace.</p>
</li>
<li><p><code>replaceValue</code>: The new substring to replace all occurrences of <code>searchValue</code>.</p>
</li>
</ul>
</li>
</ul>
<p>    <strong>Example:</strong></p>
<pre><code class="lang-javascript">    <span class="hljs-keyword">const</span> originalString = <span class="hljs-string">"Hello, World! Hello!"</span>;
    <span class="hljs-keyword">const</span> replacedString = originalString.replaceAll(<span class="hljs-string">"Hello"</span>, <span class="hljs-string">"Hi"</span>);

    <span class="hljs-built_in">console</span>.log(replacedString);
</code></pre>
<p>    In this example, <code>replaceAll()</code> is used to replace all occurrences of the substring "Hello" with "Hi" in the <code>originalString</code>. The result, <code>replacedString</code>, would be <code>"Hi, World! Hi!"</code>.</p>
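<p>    A detail worth noting (an addition to the original example): when <code>replaceAll()</code> is called with a regular expression, the pattern must include the <code>g</code> flag, otherwise a <code>TypeError</code> is thrown:</p>
<pre><code class="lang-javascript">    const s = "a-b-c";
    console.log(s.replaceAll(/-/g, "+")); // Output: "a+b+c"
    // s.replaceAll(/-/, "+");            // TypeError: non-global RegExp is not allowed
</code></pre>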
<ul>
<li><p><code>toUpperCase():</code></p>
<p>  The <code>toUpperCase()</code> method in JavaScript is used to convert all characters in a string to uppercase. It doesn't modify the original string but returns a new string with all characters converted to uppercase.</p>
<p>  <strong>Example:</strong></p>
<pre><code class="lang-javascript">  <span class="hljs-keyword">let</span> text = <span class="hljs-string">"hello, world!"</span>;
  <span class="hljs-keyword">let</span> upperCaseText = text.toUpperCase();
  <span class="hljs-built_in">console</span>.log(upperCaseText); <span class="hljs-comment">// Output: "HELLO, WORLD!"</span>
</code></pre>
<p>  In this example, the <code>toUpperCase()</code> method converts the string stored in the <code>text</code> variable to uppercase and stores the result in the <code>upperCaseText</code> variable without altering the original string.</p>
</li>
<li><p><code>toLowerCase():</code></p>
<p>  The <code>toLowerCase()</code> method in JavaScript is used to convert all characters in a string to lowercase. It doesn't modify the original string but returns a new string with all alphabetic characters converted to lowercase.</p>
<p>  <strong>Example:</strong></p>
<pre><code class="lang-javascript">  <span class="hljs-keyword">let</span> originalString = <span class="hljs-string">"Hello World"</span>;
  <span class="hljs-keyword">let</span> lowerCaseString = originalString.toLowerCase();

  <span class="hljs-built_in">console</span>.log(lowerCaseString); <span class="hljs-comment">// Outputs: "hello world"</span>
</code></pre>
<p>  In this example, <code>toLowerCase()</code> converts the string "Hello World" to an all-lowercase string, creating a new string assigned to the variable <code>lowerCaseString</code>. The original string, <code>originalString</code>, remains unchanged.</p>
</li>
<li><p><code>concat():</code></p>
<p>  The <code>concat()</code> method in JavaScript is used to combine or concatenate two or more strings and return a new string containing the combined text.</p>
<p>  <strong>Syntax:</strong></p>
<pre><code class="lang-javascript">  string.concat(string1, string2, ..., stringN)
</code></pre>
<p>  <strong>Example:</strong></p>
<pre><code class="lang-javascript">  <span class="hljs-keyword">const</span> str1 = <span class="hljs-string">"Hello, "</span>;
  <span class="hljs-keyword">const</span> str2 = <span class="hljs-string">"World!"</span>;
  <span class="hljs-keyword">const</span> combinedString = str1.concat(str2);

  <span class="hljs-built_in">console</span>.log(combinedString); <span class="hljs-comment">// Output: Hello, World!</span>
</code></pre>
<p>  In this example, <code>str1.concat(str2)</code> combines <code>str1</code> and <code>str2</code>, creating a new string <code>combinedString</code> with the text "Hello, World!".</p>
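<p>  As an illustrative aside, the <code>+</code> operator performs the same concatenation and is more commonly used in practice:</p>
<pre><code class="lang-javascript">  const a = "Hello, ";
  const b = "World!";
  console.log(a + b);       // Output: "Hello, World!"
  console.log(a.concat(b)); // Output: "Hello, World!" (identical result)
</code></pre>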
</li>
<li><p><code>trim():</code></p>
<p>  The <code>trim()</code> method in JavaScript removes whitespace (spaces, tabs, line breaks) from both ends of a string.</p>
<p>  <strong>Example:</strong></p>
<pre><code class="lang-javascript">  <span class="hljs-keyword">const</span> str = <span class="hljs-string">"   Hello, World!   "</span>;
  <span class="hljs-keyword">const</span> trimmed = str.trim();
  <span class="hljs-built_in">console</span>.log(trimmed); <span class="hljs-comment">// Outputs: "Hello, World!"</span>
</code></pre>
</li>
<li><p><code>trimStart():</code></p>
<p>  The <code>trimStart()</code> method in JavaScript removes whitespace characters from the beginning (start) of a string and returns the modified string. It does not modify the original string, but instead, it creates and returns a new string with the leading whitespace characters removed.</p>
<p>  <strong>Example:</strong></p>
<pre><code class="lang-javascript">  <span class="hljs-keyword">const</span> str = <span class="hljs-string">"   Hello, world!   "</span>;
  <span class="hljs-keyword">const</span> trimmedString = str.trimStart();

  <span class="hljs-built_in">console</span>.log(trimmedString); <span class="hljs-comment">// Outputs: "Hello, world!   "</span>
</code></pre>
<p>  In this example, <code>trimStart()</code> removes the leading spaces from the original string <code>str</code>, producing the modified string <code>trimmedString</code>.</p>
</li>
<li><p><code>trimEnd():</code></p>
<p>  The <code>trimEnd()</code> method in JavaScript removes whitespace characters from the end (right side) of a string. It returns a new string with the trailing whitespaces removed.</p>
<p>  <strong>Example:</strong></p>
<pre><code class="lang-javascript">  <span class="hljs-keyword">const</span> text = <span class="hljs-string">"   Hello, World!    "</span>;
  <span class="hljs-keyword">const</span> trimmedText = text.trimEnd();
  <span class="hljs-built_in">console</span>.log(trimmedText); <span class="hljs-comment">// Output: "   Hello, World!"</span>
</code></pre>
<p>  In this example, <code>trimEnd()</code> removes the whitespace characters at the end of the string <code>text</code>, leaving the leading spaces intact. The resulting <code>trimmedText</code> variable contains the original string without trailing whitespaces.</p>
</li>
<li><p><code>padStart():</code></p>
<p>  The <code>padStart()</code> method is used to pad a string with another string until it reaches a specified length. This padding occurs at the beginning of the string.</p>
<p>  <strong>Syntax:</strong></p>
<pre><code class="lang-javascript">  string.padStart(targetLength, padString)
</code></pre>
<ul>
<li><p><code>targetLength</code>: The final length the string should reach.</p>
</li>
<li><p><code>padString</code> (optional): The string to pad the current string with. If not specified, it pads with spaces.</p>
</li>
</ul>
</li>
</ul>
<p>    <strong>Example:</strong></p>
<pre><code class="lang-javascript">    <span class="hljs-keyword">let</span> str = <span class="hljs-string">"7"</span>;
    <span class="hljs-keyword">let</span> paddedStr = str.padStart(<span class="hljs-number">4</span>, <span class="hljs-string">"0"</span>);
    <span class="hljs-built_in">console</span>.log(paddedStr); <span class="hljs-comment">// Outputs: "0007"</span>
</code></pre>
<p>    In this example:</p>
<ul>
<li><p>The initial string is "7".</p>
</li>
<li><p><code>padStart(4, "0")</code> is used to pad the string with "0" at the beginning until it reaches a length of 4.</p>
</li>
<li><p>The resulting string is "0007", padded with "0" to reach the length of 4.</p>
</li>
</ul>
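<p>    A common practical use of <code>padStart()</code> (an illustrative addition) is zero-padding time or date components:</p>
<pre><code class="lang-javascript">    const hours = "9";
    const minutes = "5";
    console.log(`${hours.padStart(2, "0")}:${minutes.padStart(2, "0")}`);
    // Output: "09:05"
</code></pre>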
<ul>
<li><p><code>padEnd():</code></p>
<p>  The <code>padEnd()</code> method is used to pad the end of a string with a specified character(s) until the resulting string reaches a given length. If the provided string is already equal to or longer than the specified length, it returns the original string.</p>
<p>  <strong>Syntax:</strong></p>
<pre><code class="lang-javascript">  string.padEnd(targetLength [, padString])
</code></pre>
<ul>
<li><p><code>targetLength</code>: The desired length of the resulting string.</p>
</li>
<li><p><code>padString</code> (optional): The string to pad with; it defaults to a space character.</p>
</li>
</ul>
</li>
</ul>
<p>    <strong>Example:</strong></p>
<pre><code class="lang-javascript">    <span class="hljs-keyword">const</span> originalString = <span class="hljs-string">'Hello'</span>;
    <span class="hljs-keyword">const</span> paddedString = originalString.padEnd(<span class="hljs-number">10</span>, <span class="hljs-string">'-'</span>);

    <span class="hljs-built_in">console</span>.log(paddedString); <span class="hljs-comment">// Outputs: "Hello-----"</span>
</code></pre>
<p>    In the example, the <code>padEnd()</code> method pads the original string 'Hello' with hyphens (<code>'-'</code>) until the resulting string reaches a length of 10 characters. The output is <code>"Hello-----"</code> because it added five hyphens to the end of the original string to achieve the specified length.</p>
<ul>
<li><p><code>charAt():</code></p>
<p>  The <code>charAt()</code> method in JavaScript is used to retrieve a single character from a string at a specified index.</p>
<p>  <strong>Syntax:</strong></p>
<pre><code class="lang-javascript">  string.charAt(index)
</code></pre>
<ul>
<li><p><code>string</code> is the string from which to extract the character.</p>
</li>
<li><p><code>index</code> is the position of the character to retrieve. The index is a zero-based value.</p>
</li>
</ul>
</li>
</ul>
<p>    <strong>Example:</strong></p>
<pre><code class="lang-javascript">    <span class="hljs-keyword">const</span> str = <span class="hljs-string">"Hello, World!"</span>;
    <span class="hljs-built_in">console</span>.log(str.charAt(<span class="hljs-number">7</span>)); <span class="hljs-comment">// Output: W</span>
</code></pre>
<p>    In this example, <code>str.charAt(7)</code> retrieves the character at index 7 in the string "Hello, World!", which is 'W'.</p>
<ul>
<li><p><code>charCodeAt():</code></p>
<p>  The <code>charCodeAt()</code> method in JavaScript is used to retrieve the Unicode value of a character at a specified index within a string.</p>
<p>  <strong>Syntax:</strong></p>
<pre><code class="lang-javascript">  string.charCodeAt(index)
</code></pre>
<ul>
<li><p><code>string</code> is the string containing the character.</p>
</li>
<li><p><code>index</code> is the position of the character for which the Unicode value needs to be obtained.</p>
</li>
</ul>
</li>
</ul>
<p>    <strong>Example:</strong></p>
<pre><code class="lang-javascript">    <span class="hljs-keyword">const</span> str = <span class="hljs-string">"Hello"</span>;
    <span class="hljs-built_in">console</span>.log(str.charCodeAt(<span class="hljs-number">0</span>)); <span class="hljs-comment">// Outputs: 72 (Unicode value for 'H')</span>
    <span class="hljs-built_in">console</span>.log(str.charCodeAt(<span class="hljs-number">3</span>)); <span class="hljs-comment">// Outputs: 108 (Unicode value for 'l')</span>
</code></pre>
<p>    In the example, <code>str.charCodeAt(0)</code> retrieves the Unicode value of the character at index 0 in the string "Hello", which is 72, representing the character 'H'. Similarly, <code>str.charCodeAt(3)</code> fetches the Unicode value of the character 'l' at index 3, which is 108.</p>
<ul>
<li><p><code>split():</code></p>
<p>  The <code>split()</code> method in JavaScript is used to split a string into an array of substrings based on a specified separator.</p>
<p>  <strong>Syntax:</strong></p>
<pre><code class="lang-javascript">  string.split(separator, limit)
</code></pre>
<ul>
<li><p><code>string</code>: The original string that will be split.</p>
</li>
<li><p><code>separator</code>: Specifies the character or regular expression used to determine where to split the string. If omitted, the entire string becomes the only element in the resulting array.</p>
</li>
<li><p><code>limit</code> (Optional): A number specifying the maximum number of splits to be found. The remainder of the string is not included in the resulting array if the limit is reached.</p>
</li>
</ul>
</li>
</ul>
<p>    <strong>Example:</strong></p>
<pre><code class="lang-javascript">    <span class="hljs-keyword">const</span> sentence = <span class="hljs-string">"The quick brown fox"</span>;
    <span class="hljs-keyword">const</span> words = sentence.split(<span class="hljs-string">" "</span>); <span class="hljs-comment">// Splits the string into an array of words using space as the separator</span>
    <span class="hljs-built_in">console</span>.log(words); <span class="hljs-comment">// Output: ["The", "quick", "brown", "fox"]</span>
</code></pre>
<p>    In the given example, the <code>split(" ")</code> method splits the <code>sentence</code> string into an array of words using the space character as the separator. The resulting array contains each word as a separate element.</p>
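<p>    The optional <code>limit</code> parameter, and the behavior of an empty-string separator, can be seen in this short follow-up sketch:</p>

```javascript
const sentence = "The quick brown fox";

// With limit = 2, only the first two substrings are kept;
// the rest of the string is discarded from the result.
const firstTwo = sentence.split(" ", 2);
console.log(firstTwo); // Output: ["The", "quick"]

// An empty-string separator splits the string into individual characters.
const chars = "Hi!".split("");
console.log(chars); // Output: ["H", "i", "!"]
```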
]]></content:encoded></item><item><title><![CDATA[Decoding Network Communication: The OSI Model Simplified]]></title><description><![CDATA[What is the OSI Model?
The OSI (Open Systems Interconnection) model, in simple terms, is like a set of rules that helps different computers and devices talk to each other over a network. It's like a tower with seven floors, and each floor has a speci...]]></description><link>https://blog.sushant.dev/decoding-network-communication-the-osi-model-simplified</link><guid isPermaLink="true">https://blog.sushant.dev/decoding-network-communication-the-osi-model-simplified</guid><category><![CDATA[networking]]></category><category><![CDATA[computer networking]]></category><category><![CDATA[OSI]]></category><category><![CDATA[OSI Model]]></category><dc:creator><![CDATA[Sushant Pathare]]></dc:creator><pubDate>Sat, 12 Aug 2023 15:44:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1691850432926/03c965f5-613a-4c1e-9158-b6231c352792.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-what-is-the-osi-model"><strong>What is the OSI Model?</strong></h3>
<p>The OSI (Open Systems Interconnection) model, in simple terms, is like a set of rules that helps different computers and devices talk to each other over a network. It's like a tower with seven floors, and each floor has a specific job to make sure messages travel smoothly from one place to another. Just as you need roads, signs, and traffic rules to drive, the OSI model has layers that handle tasks like sending data, checking for mistakes, and finding the best route for information to reach its destination.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691850626784/b59cbf64-c45d-4776-bd27-951d9795ee6b.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-physical-layer"><strong>Physical Layer</strong></h3>
<p>The Physical Layer is the lowest layer in the OSI model. It deals with the actual physical connection between devices and the transmission of raw bits over a physical medium, such as cables, wires, and airwaves. Think of this layer as the foundation upon which all communication is built.</p>
<p><strong>Functions of the Physical Layer:</strong></p>
<ol>
<li><p><strong>Transmission of Raw Bits:</strong> At its core, the Physical Layer's main job is to transmit individual bits (0s and 1s) from one device to another. It doesn't concern itself with the meaning or structure of the data; it focuses solely on moving these binary signals.</p>
</li>
<li><p><strong>Physical Medium Selection:</strong> The Physical Layer is responsible for selecting the appropriate physical medium for transmitting data. This could be twisted-pair copper cables, fiber-optic cables, wireless radio waves, or even infrared signals.</p>
</li>
<li><p><strong>Data Encoding and Signaling:</strong> Before data can be sent over a physical medium, it needs to be converted into a format that the medium can handle. Different encoding schemes are used to represent binary data using variations in voltage, frequency, or other physical properties of the medium.</p>
</li>
<li><p><strong>Bit Rate and Bandwidth:</strong> The Physical Layer determines the speed at which data is transmitted, known as the bit rate. It also defines the bandwidth, which is the range of frequencies that the medium can carry. Higher bandwidth allows for more data to be transmitted simultaneously.</p>
</li>
<li><p><strong>Synchronization:</strong> To ensure that the sender and receiver stay in sync, the Physical Layer handles synchronization by adding start and stop bits to each data unit. These bits help both ends identify the beginning and end of a transmission.</p>
</li>
<li><p><strong>Topology and Connectors:</strong> The Physical Layer also deals with the physical layout of devices on a network, known as the topology. It defines how devices are connected and how data flows. Different connectors (like RJ45 for Ethernet cables) are used to ensure proper physical connections.</p>
</li>
<li><p><strong>Transmission Modes:</strong> The Physical Layer defines how data is transmitted between devices. This can be done in different modes, such as simplex (one-way), half-duplex (both ways but not simultaneously), or full-duplex (both ways simultaneously).</p>
</li>
</ol>
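<p>To make the "raw bits" idea concrete, here is a small sketch (illustrative only, not an actual line-coding scheme such as NRZ or Manchester) that exposes the bit pattern a string reduces to before the Physical Layer can transmit it:</p>

```javascript
// Illustrative only: show the raw bit pattern of each character in a string.
// A real physical-layer encoding maps these bits onto voltage, light, or
// radio-signal changes; this sketch just makes the 0s and 1s visible.
function toBits(text) {
  return [...text]
    .map((ch) => ch.charCodeAt(0).toString(2).padStart(8, "0"))
    .join(" ");
}

console.log(toBits("Hi")); // Output: "01001000 01101001"
```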
<p><strong>Real-Life Example:</strong></p>
<p>Imagine you're sending a handwritten letter to a friend. In this analogy, the Physical Layer corresponds to the actual paper, ink, and the postal service that physically carries the letter. The paper represents the medium, the ink the encoding, and the postal service ensures the letter's transmission. Just as you need a clear path for the letter to reach your friend, devices on a network need proper physical connections and mediums for data to travel.</p>
<p>In essence, the Physical Layer lays the groundwork for all communication in a network. Without this layer, devices wouldn't have a way to communicate directly, just as letters couldn't reach their destinations without roads, vehicles, and postal services.</p>
<h3 id="heading-data-link-layer"><strong>Data Link Layer</strong></h3>
<p>The Data Link Layer is the second layer from the bottom in the <strong>OSI</strong> (Open Systems Interconnection) model. It is responsible for node-to-node delivery of data, and its major role is to ensure error-free transmission of information. The DLL also encodes, decodes, and organizes outgoing and incoming data. It is often considered the most complex layer of the OSI model because it hides the underlying complexities of the hardware from the layers above it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691852095084/01dc0671-3b94-400a-95d1-ff99ef19563d.png" alt class="image--center mx-auto" /></p>
<p><strong>Functions of the Data Link Layer:</strong></p>
<ol>
<li><p><strong>Framing:</strong> The Data Link Layer takes the stream of bits from the Physical Layer and divides it into manageable chunks called frames. Each frame contains a specific amount of data along with control information for error detection and flow control.</p>
</li>
<li><p><strong>MAC Addressing:</strong> Every device on a network has a unique Media Access Control (MAC) address. The Data Link Layer uses these addresses to identify the source and destination of data frames. This ensures that data is delivered to the correct recipient within the same local network.</p>
</li>
<li><p><strong>Error Detection and Correction:</strong> The Data Link Layer is responsible for detecting and, in some cases, correcting errors that may occur during transmission. Various techniques, such as parity checks and cyclic redundancy checks (CRC), are employed to verify the integrity of data frames.</p>
</li>
<li><p><strong>Flow Control:</strong> To prevent overwhelming the receiving device with data, the Data Link Layer manages the flow of data between sender and receiver. This layer ensures that data is transmitted at a pace the receiving device can handle.</p>
</li>
<li><p><strong>Media Access Control:</strong> In shared network environments, where multiple devices compete for access to the same communication medium (like Ethernet), the Data Link Layer employs protocols for media access control. This ensures fair and efficient sharing of the medium.</p>
</li>
<li><p><strong>Logical Link Control:</strong> The Logical Link Control (LLC) sublayer of the Data Link Layer handles flow control and error handling at the logical level. It manages the communication between devices, establishing and terminating connections as needed.</p>
</li>
</ol>
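<p>The error-detection idea can be sketched with a minimal even-parity check (real links use stronger codes such as CRC-32; this is purely illustrative):</p>

```javascript
// Even parity: the sender appends a parity bit that makes the total
// number of 1s in the frame even. The receiver recounts the 1s; an odd
// count means at least one bit was flipped in transit.
function parityBit(bits) {
  const ones = bits.filter((b) => b === 1).length;
  return ones % 2; // 1 if the data bits contain an odd number of 1s
}

const data = [1, 0, 1, 1, 0, 0, 1]; // four 1s -> parity bit 0
const frame = [...data, parityBit(data)];

// On arrival, the whole frame (data + parity) must hold an even count of 1s.
const errorDetected = frame.filter((b) => b === 1).length % 2 !== 0;
console.log(errorDetected); // Output: false (no error detected)
```

Note that parity catches any single-bit error but misses two flipped bits, which is one reason real frames carry CRCs instead.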
<h3 id="heading-network-layer"><strong>Network Layer</strong></h3>
<p>The Network Layer, the third layer in the OSI model, is responsible for enabling communication between devices on different networks. This layer focuses on routing data packets from the source to the destination through various interconnected networks, regardless of their physical locations.</p>
<p><strong>Functions of the Network Layer:</strong></p>
<ol>
<li><p><strong>Logical Addressing:</strong> While the Data Link Layer uses MAC addresses for communication within a local network, the Network Layer introduces logical addressing. Devices are assigned unique IP (Internet Protocol) addresses, which identify them globally on the internet or within larger networks.</p>
</li>
<li><p><strong>Routing:</strong> The Network Layer handles the complex task of choosing the best path for data packets to travel from the source to the destination. This involves considering factors like the distance, network congestion, and potential failures.</p>
</li>
<li><p><strong>Packet Forwarding:</strong> Once the optimal route is determined, the Network Layer's routers forward data packets hop by hop, making sure they arrive at their intended destinations.</p>
</li>
<li><p><strong>Fragmentation and Reassembly:</strong> Data packets might need to traverse different network types with varying maximum sizes. The Network Layer can break down larger packets into smaller fragments for transmission and then reassemble them at the receiving end.</p>
</li>
<li><p><strong>Logical Subnetting:</strong> The Network Layer enables subnetting, which involves dividing a larger network into smaller segments, each with its own range of IP addresses. This enhances network management and organization.</p>
</li>
<li><p><strong>Quality of Service (QoS):</strong> The Network Layer can prioritize certain types of data traffic, ensuring that time-sensitive applications like VoIP and video streaming receive a smoother experience.</p>
</li>
</ol>
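<p>Logical addressing and subnetting can be sketched in a few lines. The helper below (an illustrative IPv4-only sketch, not production routing code) checks whether an address falls inside a subnet by comparing network portions under a prefix mask:</p>

```javascript
// Convert a dotted-quad IPv4 address to an unsigned 32-bit integer.
function ipToInt(ip) {
  return ip.split(".").reduce((acc, octet) => (acc << 8) + Number(octet), 0) >>> 0;
}

// True if `ip` lies inside the subnet given in CIDR form, e.g. "192.168.1.0/24".
// The /24 prefix means the first 24 bits identify the network.
function inSubnet(ip, cidr) {
  const [network, prefix] = cidr.split("/");
  const mask = prefix === "0" ? 0 : (~0 << (32 - Number(prefix))) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(network) & mask);
}

console.log(inSubnet("192.168.1.42", "192.168.1.0/24")); // Output: true
console.log(inSubnet("192.168.2.1", "192.168.1.0/24"));  // Output: false
```

This same masking comparison is essentially what a router performs when deciding whether a destination is local or must be forwarded to the next hop.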
<p><strong>Why is the Network Layer Essential?</strong></p>
<p>Imagine you're planning a road trip across different cities. The Network Layer is like your GPS system:</p>
<ul>
<li><p><strong>Logical Addressing:</strong> Just as you input the destination's address into your GPS, devices use IP addresses to identify where data packets need to go.</p>
</li>
<li><p><strong>Routing:</strong> Your GPS calculates the fastest route by considering road conditions, traffic, and distance. The Network Layer similarly evaluates routes to determine the best path for data.</p>
</li>
<li><p><strong>Packet Forwarding:</strong> As you drive, the GPS guides you through each turn. Routers in the Network Layer direct data packets towards their destinations, making sure they reach the right stops along the way.</p>
</li>
<li><p><strong>Fragmentation and Reassembly:</strong> On highways with different speed limits, you might slow down and speed up accordingly. Similarly, the Network Layer adjusts packet sizes to fit different network constraints.</p>
</li>
<li><p><strong>Logical Subnetting:</strong> In a road trip, you might visit different regions. The Network Layer's subnetting divides networks into smaller segments, aiding efficient data management.</p>
</li>
<li><p><strong>QoS:</strong> If you prioritize sightseeing over shopping, your GPS can adjust the route. Likewise, the Network Layer prioritizes data types based on their importance.</p>
</li>
</ul>
<h3 id="heading-transport-layer"><strong>Transport Layer</strong></h3>
<p>The Transport Layer, situated above the Network Layer, is responsible for managing end-to-end communication and ensuring the reliable delivery of data between devices. This layer takes the data from higher layers and breaks it down into manageable segments for transmission, all while maintaining the integrity and order of the information.</p>
<p><strong>Functions of the Transport Layer:</strong></p>
<ol>
<li><p><strong>Segmentation and Reassembly:</strong> The Transport Layer divides the data received from upper layers into smaller segments that are easier to manage. These segments are then transmitted independently and reassembled at the destination to reconstruct the original message.</p>
</li>
<li><p><strong>Flow Control:</strong> To prevent overwhelming the receiving device with more data than it can handle, the Transport Layer manages the flow of data. It ensures that the sender transmits data at a pace that matches the receiver's capacity.</p>
</li>
<li><p><strong>Error Detection and Correction:</strong> While the lower layers handle basic error detection, the Transport Layer adds an additional layer of error checking. It uses mechanisms like checksums to confirm that data has been transmitted correctly and requests retransmission if errors are detected.</p>
</li>
<li><p><strong>Multiplexing and Demultiplexing:</strong> When multiple applications on a device are using the network simultaneously, the Transport Layer assigns unique identifiers (port numbers) to each application. This enables proper sorting and delivery of data at the receiving end.</p>
</li>
<li><p><strong>Connection Establishment and Termination:</strong> For communication to take place, the Transport Layer establishes a connection between sender and receiver. It defines how the devices will exchange data and ensures a smooth termination of the connection when communication is complete.</p>
</li>
</ol>
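<p>A toy version of the checksum idea (far simpler than the 16-bit one's-complement Internet checksum that TCP actually uses) might look like this:</p>

```javascript
// Toy checksum: sum of character codes modulo 256, shown only to
// illustrate the verify-and-retransmit idea described above.
function checksum(segment) {
  let sum = 0;
  for (const ch of segment) sum = (sum + ch.charCodeAt(0)) % 256;
  return sum;
}

const sent = "hello";
const sentSum = checksum(sent); // transmitted alongside the segment

// The receiver recomputes the checksum; a mismatch triggers retransmission.
const corrupted = "hellp"; // one character flipped in transit
console.log(checksum(sent) === sentSum);      // Output: true
console.log(checksum(corrupted) === sentSum); // Output: false
```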
<p><strong>Importance of the Transport Layer:</strong></p>
<p>Imagine you're sending a package through a shipping service. The Transport Layer is analogous to the processes involved:</p>
<ul>
<li><p><strong>Segmentation:</strong> Just as your package might be too large to send in one piece, the Transport Layer divides the data into smaller segments for easier transmission.</p>
</li>
<li><p><strong>Flow Control:</strong> If you were sending packages to a friend's mailbox, you wouldn't overwhelm the mailbox with too many packages at once. Similarly, the Transport Layer ensures data is sent at a manageable rate.</p>
</li>
<li><p><strong>Error Detection:</strong> If a package arrives damaged or incomplete, the shipping service will ask you to confirm its condition. Likewise, the Transport Layer uses checksums to verify data integrity and request retransmission if necessary.</p>
</li>
<li><p><strong>Multiplexing:</strong> Think of your friend receiving packages from different senders. Each package is labeled with the sender's name, allowing your friend to know who sent each package. Similarly, the Transport Layer uses port numbers to route data to the correct application.</p>
</li>
<li><p><strong>Connection Management:</strong> When you arrange a package delivery, you provide details like the sender's address, recipient's address, and desired delivery date. The Transport Layer establishes a similar "conversation" between devices.</p>
</li>
</ul>
<h3 id="heading-session-layer"><strong>Session Layer</strong></h3>
<p>The Session Layer, located between the Transport Layer and the Presentation Layer, focuses on establishing, managing, and terminating communication sessions between devices. It ensures that data exchange is organized, synchronized, and reliable, resembling a manager overseeing a conversation between two parties.</p>
<p><strong>Functions of the Session Layer:</strong></p>
<ol>
<li><p><strong>Session Establishment:</strong> Before data exchange begins, the Session Layer establishes a session between the sender and receiver. This involves setting up the rules for communication, such as who gets to speak when.</p>
</li>
<li><p><strong>Dialog Control:</strong> During a session, the Session Layer manages the flow of conversation. It controls which party has the "floor" at a given time and ensures that they take turns without interrupting each other.</p>
</li>
<li><p><strong>Synchronization:</strong> Imagine a virtual hand raising during a video call to request speaking time. The Session Layer helps maintain synchronization between devices to prevent confusion and ensure smooth communication.</p>
</li>
<li><p><strong>Data Segmentation and Reassembly:</strong> Like splitting a long story into chapters, the Session Layer can segment large amounts of data into smaller, manageable pieces for transmission. It then ensures that these pieces are reassembled correctly at the receiving end.</p>
</li>
<li><p><strong>Session Termination:</strong> When the conversation is complete, the Session Layer manages the graceful termination of the session. This ensures that resources are properly released, and both parties are aware that the communication has ended.</p>
</li>
</ol>
<p><strong>Importance of the Session Layer:</strong></p>
<p>Think of a video conference call where participants take turns speaking and avoid talking over each other. The Session Layer serves a similar purpose:</p>
<ul>
<li><p><strong>Session Establishment:</strong> Just as you send an invitation for a video call, the Session Layer sets up the rules and protocols for communication.</p>
</li>
<li><p><strong>Dialog Control:</strong> During the call, the Session Layer ensures that participants speak one at a time, preventing confusion and chaos.</p>
</li>
<li><p><strong>Synchronization:</strong> Like synchronized dancing, the Session Layer keeps devices "in step" to maintain a coherent conversation.</p>
</li>
<li><p><strong>Data Segmentation:</strong> In a long conversation, you might divide topics into sections. Similarly, the Session Layer can break down large data into manageable segments.</p>
</li>
<li><p><strong>Session Termination:</strong> When the call ends, the Session Layer ensures a smooth conclusion, so participants know when to hang up.</p>
</li>
</ul>
<h3 id="heading-presentation-layer"><strong>Presentation Layer</strong></h3>
<p>The Presentation Layer is the sixth layer of the OSI model, sitting between the Session Layer and the Application Layer. It focuses on ensuring that data is presented in a way that is understandable, secure, and efficient for communication between devices on a network.</p>
<h4 id="heading-functions-of-presentation-layer"><strong>Functions of Presentation Layer:</strong></h4>
<ol>
<li><p><strong>Data Translation:</strong> The Presentation Layer handles the translation of data between different formats. This translation ensures that devices with different ways of representing data can still communicate effectively. It takes data received from the Application Layer and prepares it for transmission.</p>
</li>
<li><p><strong>Data Encryption:</strong> Encryption is a critical function within the Presentation Layer. It involves converting plain data into a scrambled, unreadable form using cryptographic techniques. This ensures that even if unauthorized individuals access the data, they cannot understand its content without the appropriate decryption key.</p>
</li>
<li><p><strong>Data Compression:</strong> The Presentation Layer compresses data to reduce its size for more efficient transmission and storage. By removing redundancies and unnecessary bits from the data, compression optimizes network bandwidth usage and enhances data transfer speeds.</p>
</li>
</ol>
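<p>As a tiny illustration of the compression idea, here is a run-length encoder (real presentation-level compression uses far more sophisticated algorithms such as DEFLATE; this sketch only shows the principle of removing redundancy):</p>

```javascript
// Run-length encoding: collapse each run of repeated characters into a
// <count><character> pair. Effective only when the input has long runs.
function runLengthEncode(text) {
  let out = "";
  let i = 0;
  while (i < text.length) {
    let run = 1;
    while (i + run < text.length && text[i + run] === text[i]) run++;
    out += run + text[i];
    i += run;
  }
  return out;
}

console.log(runLengthEncode("aaaabbbcc")); // Output: "4a3b2c"
```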
<h3 id="heading-application-layer"><strong>Application Layer</strong></h3>
<p>The Application Layer, the top layer of the OSI model, is where all the magic happens when you interact with software and services on your devices. It's like the front door to your digital world, allowing you to use various applications, access websites, send emails, and do so much more!</p>
<h4 id="heading-functions-of-application-layer">Functions of <strong>Application Layer</strong>:</h4>
<ol>
<li><p><strong>User Interface:</strong> The Application Layer provides the interface that you interact with directly. It's the screen you see, the buttons you click, and the way you give commands to applications.</p>
</li>
<li><p><strong>Application Services:</strong> This layer offers a wide range of services and functions that apps use to perform tasks. Whether you're browsing the web, checking emails, or chatting with friends, these services make it all possible.</p>
</li>
<li><p><strong>Network Communication:</strong> Applications need to communicate with other devices or servers over the network. The Application Layer handles this communication, ensuring that data gets from your device to its destination and back.</p>
</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[Network Topology]]></title><description><![CDATA[In the world of computer networks, the arrangement of devices and their interconnections is known as network topology. The choice of network topology greatly influences how data and information flow within an organization or system. Understanding the...]]></description><link>https://blog.sushant.dev/network-topology</link><guid isPermaLink="true">https://blog.sushant.dev/network-topology</guid><category><![CDATA[#Topology]]></category><category><![CDATA[networking]]></category><category><![CDATA[Computer Science]]></category><category><![CDATA[#computernetwork ]]></category><dc:creator><![CDATA[Sushant Pathare]]></dc:creator><pubDate>Wed, 09 Aug 2023 08:45:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1691570471563/a44c3db9-f16f-45b0-8779-bc4444b7d839.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the world of computer networks, the arrangement of devices and their interconnections is known as network topology. The choice of network topology greatly influences how data and information flow within an organization or system. Understanding the various types of network topologies, along with their advantages, disadvantages, and real-world examples, can help businesses and individuals make informed decisions when designing and implementing their network infrastructure.</p>
<h2 id="heading-types-of-network-topology">Types of Network Topology</h2>
<p>The arrangement of the nodes and connecting links that make up a network, from sender to receiver, is referred to as <strong>Network Topology</strong>. The common network topologies are:</p>
<ul>
<li><p>Point to Point Topology</p>
</li>
<li><p>Mesh Topology</p>
</li>
<li><p>Star Topology</p>
</li>
<li><p>Bus Topology</p>
</li>
<li><p>Ring Topology</p>
</li>
<li><p>Tree Topology</p>
</li>
<li><p>Hybrid Topology</p>
</li>
</ul>
<h3 id="heading-point-to-point-topology"><strong>Point-to-Point Topology</strong></h3>
<p>Point-to-Point topology, also known as a direct connection, involves a dedicated link between two devices. It's simple and efficient for connecting two locations directly.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691568334600/9432f0a1-2041-4a2c-aeee-ee01faa0cdcc.png" alt class="image--center mx-auto" /></p>
<p>Advantages:</p>
<ul>
<li><p>Direct communication between connected devices.</p>
</li>
<li><p>Minimal chances of data collisions.</p>
</li>
<li><p>Suitable for connecting remote locations.</p>
</li>
</ul>
<p>Disadvantages:</p>
<ul>
<li><p>Scalability is limited to individual connections.</p>
</li>
<li><p>Not suitable for larger networks due to the complexity of maintaining multiple dedicated links.</p>
</li>
</ul>
<p>Applications: Connecting two branch offices over a dedicated leased line, or linking a computer to a printer.</p>
<h3 id="heading-mesh-topology"><strong>Mesh Topology</strong></h3>
<p>Mesh topology involves every device being connected to every other device in the network. This arrangement offers high redundancy and fault tolerance, as multiple paths exist for data to travel.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691568486186/09057c3f-b2c4-435d-a2c8-58bcc346ea44.png" alt class="image--center mx-auto" /></p>
<p>Advantages:</p>
<ul>
<li><p>High reliability due to redundancy; network remains operational even if some devices fail.</p>
</li>
<li><p>Can handle heavy traffic and high data loads.</p>
</li>
<li><p>Scalable by adding more devices without affecting network performance.</p>
</li>
</ul>
<p>Disadvantages:</p>
<ul>
<li><p>Complex to install and manage, especially as the number of devices increases.</p>
</li>
<li><p>Requires a significant amount of cabling, which can be costly and time-consuming.</p>
</li>
<li><p>Difficult to troubleshoot due to the multitude of connections.</p>
</li>
</ul>
<p>Example: Military and critical communication networks, where uninterrupted connectivity and redundancy are essential.</p>
<h3 id="heading-star-topology"><strong>Star Topology</strong></h3>
<p>In a star topology, all devices are connected to a central hub or switch. Each device has its own dedicated connection to the central hub, ensuring that the failure of one device does not affect the entire network.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691568649577/9959c4d2-299d-49aa-8332-51cfd687c184.png" alt class="image--center mx-auto" /></p>
<p>Advantages:</p>
<ul>
<li><p>Easy to manage and troubleshoot.</p>
</li>
<li><p>Isolation of devices prevents network-wide disruptions.</p>
</li>
<li><p>Scalable, as adding new devices is straightforward.</p>
</li>
</ul>
<p>Disadvantages:</p>
<ul>
<li><p>Dependency on the central hub; its failure leads to the entire network's disruption.</p>
</li>
<li><p>Requires more cabling than a bus topology.</p>
</li>
<li><p>The central hub represents a potential single point of failure.</p>
</li>
</ul>
<p>Example: Most modern home and office networks, where computers, printers, and other devices connect to a central router or switch.</p>
<h3 id="heading-bus-topology">Bus Topology</h3>
<p>Bus topology is one of the simplest and most straightforward network configurations. In this setup, all devices are connected to a central cable, known as a bus. While it was popular in the past, its use has decreased due to the emergence of more advanced topologies.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691569206072/fcf8e712-8843-416c-ba49-380579401784.png" alt class="image--center mx-auto" /></p>
<p>Advantages:</p>
<ul>
<li><p>Easy to set up and cost-effective for small networks.</p>
</li>
<li><p>Requires less cable length compared to some other topologies.</p>
</li>
<li><p>Well-suited for small organizations with limited resources.</p>
</li>
</ul>
<p>Disadvantages:</p>
<ul>
<li><p>The entire network can be affected if the main cable fails or is damaged.</p>
</li>
<li><p>Performance degrades as more devices are added, leading to potential bottlenecks.</p>
</li>
<li><p>Difficult to troubleshoot connectivity issues.</p>
</li>
</ul>
<p>Example: Small office or home network with a few computers connected through a single cable.</p>
<h3 id="heading-ring-topology">Ring Topology</h3>
<p>In a ring topology, each device is connected to exactly two other devices, forming a closed loop. Data travels in one direction around the ring, passing through each device until it reaches its destination.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691569173334/5e8a2c96-c7f8-41dd-a1f9-0964a36db5aa.png" alt class="image--center mx-auto" /></p>
<p>Advantages:</p>
<ul>
<li><p>Data travels efficiently in a single direction, reducing collisions.</p>
</li>
<li><p>Well-suited for networks with consistent traffic flow.</p>
</li>
<li><p>Suitable for smaller networks and fewer devices.</p>
</li>
</ul>
<p>Disadvantages:</p>
<ul>
<li><p>Failure of a single device can disrupt the entire network.</p>
</li>
<li><p>Adding or removing devices can be complicated and may require network downtime.</p>
</li>
<li><p>Maintenance and troubleshooting can be challenging.</p>
</li>
</ul>
<p>Example: Token Ring networks, which were popular in the past, used a ring topology for connecting devices.</p>
<h3 id="heading-tree-topology">Tree Topology</h3>
<p>Tree topology is a hierarchical structure that combines characteristics of star and bus topologies. It features multiple star-configured networks connected to a central bus backbone.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691569203051/3a88b90c-adde-4f5a-8e12-d477edd33af4.png" alt class="image--center mx-auto" /></p>
<p>Advantages:</p>
<ul>
<li><p>Scalable; can accommodate a large number of devices.</p>
</li>
<li><p>Centralized control for segments simplifies management.</p>
</li>
<li><p>Redundancy possible by connecting secondary hubs to the primary hub.</p>
</li>
</ul>
<p>Disadvantages:</p>
<ul>
<li><p>Dependency on the central hub; failure affects connected segments.</p>
</li>
<li><p>If the central backbone fails, the entire network is affected.</p>
</li>
<li><p>Complex to set up and maintain due to multiple segments.</p>
</li>
</ul>
<p>Example: Large corporate networks often use tree topology, where departments or floors are connected as star-configured segments.</p>
<h3 id="heading-hybrid-topology">Hybrid Topology</h3>
<p>Hybrid topology is a combination of two or more different topologies. For instance, a network might combine star and ring topologies to benefit from their individual strengths.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691570908238/99f995f1-dd25-4b35-ae62-c626cb24b8ad.png" alt class="image--center mx-auto" /></p>
<p>Advantages:</p>
<ul>
<li><p>Offers the benefits of multiple topologies, tailoring the network to specific needs.</p>
</li>
<li><p>Reduces the limitations of individual topologies.</p>
</li>
<li><p>Enhanced flexibility and adaptability.</p>
</li>
</ul>
<p>Disadvantages:</p>
<ul>
<li><p>Complexity increases as multiple topologies are integrated.</p>
</li>
<li><p>Requires careful planning and management.</p>
</li>
<li><p>Possibility of higher costs due to the combination of hardware and cabling.</p>
</li>
</ul>
<p>Example: A larger organization might use a combination of star and mesh topologies, creating a reliable and scalable network infrastructure.</p>
<h2 id="heading-physical-topology-vs-logical-topology"><strong>Physical Topology Vs Logical Topology</strong></h2>
<h3 id="heading-physical-topology"><strong>Physical Topology</strong></h3>
<p>Think of physical topology as the "real-world" layout of devices and cables in a network. Imagine you have a group of computers, printers, and other devices that need to be connected to share information. How are these devices physically connected to each other? That's where physical topology comes into play.</p>
<p>Physical topology is like arranging these devices on a map. Imagine drawing lines to show how they're connected with actual cables. There are different ways you can arrange these cables and devices, and each way has its own benefits and drawbacks. Some common physical topologies are:</p>
<ul>
<li><p><strong>Star Topology:</strong> Imagine all devices connected to a central hub, just like spokes of a bicycle wheel connected to the center. This central hub helps devices communicate with each other.</p>
</li>
<li><p><strong>Bus Topology:</strong> Picture a main cable, and all devices are connected to it like houses along a street. Information travels along this cable, and devices receive what's meant for them.</p>
</li>
<li><p><strong>Ring Topology:</strong> Envision devices forming a circle. Each device is connected to two neighbors, and data travels around the circle in one direction.</p>
</li>
</ul>
<h3 id="heading-logical-topology"><strong>Logical Topology</strong></h3>
<p>Now, let's think about how devices communicate in a network in a more abstract way. Logical topology focuses on how data flows between devices, regardless of their physical arrangement. Imagine you're sending messages between these devices, even though you might not know the exact cables they're connected with.</p>
<p>Logical topology is like drawing a map that shows the paths data takes as it travels from one device to another. It doesn't matter if the devices are physically connected in a star, bus, or ring setup; what matters is the route data follows. Some common logical topologies are:</p>
<ul>
<li><p><strong>Mesh Topology:</strong> Think of creating connections between all devices, so there are multiple paths for data to take. If one path is blocked, data can find another way.</p>
</li>
<li><p><strong>Star Topology (Logical):</strong> Imagine you're sending a message from one device to another through a central hub, even if the devices aren't physically connected that way.</p>
</li>
<li><p><strong>Bus Topology (Logical):</strong> Picture data traveling along a main path and devices receiving what's intended for them, even if they're not connected in a straight line physically.</p>
</li>
</ul>
<p>Remember, physical topology is about the actual cables and devices' layout, while logical topology focuses on how data moves between devices, regardless of their physical arrangement. Both aspects are important when designing and understanding computer networks.</p>
]]></content:encoded></item><item><title><![CDATA[Exploring Computer Networks: Types and Components]]></title><description><![CDATA[Types of Computer Networks

Personal Area Network (PAN):
 A Personal Area Network (PAN) is the smallest type of network and typically involves devices within your immediate reach. It allows communication and data sharing between personal devices.
 Ex...]]></description><link>https://blog.sushant.dev/exploring-computer-networks-types-and-components</link><guid isPermaLink="true">https://blog.sushant.dev/exploring-computer-networks-types-and-components</guid><category><![CDATA[Computer Science]]></category><category><![CDATA[networking]]></category><category><![CDATA[computer networking]]></category><category><![CDATA[computer network]]></category><dc:creator><![CDATA[Sushant Pathare]]></dc:creator><pubDate>Mon, 07 Aug 2023 15:13:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1691309212869/c3952008-9595-49c3-998a-5fb1d4050781.avif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-types-of-computer-networks"><strong>Types of Computer Networks</strong></h3>
<ol>
<li><p><strong>Personal Area Network (PAN):</strong></p>
<p> A Personal Area Network (PAN) is the smallest type of network and typically involves devices within your immediate reach. It allows communication and data sharing between personal devices.</p>
<p> Example: When you connect your smartphone to wireless earphones or a smartwatch using Bluetooth, you're setting up a PAN. This network enables seamless communication and data exchange between your personal gadgets.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691311269232/08ff741a-122c-49f5-a73a-245cb62433c5.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Local Area Network (LAN):</strong></p>
<p> A Local Area Network (LAN) connects devices within a limited area, such as a home, office, or school building. LANs are prevalent and essential for sharing resources and information within a close-knit environment.</p>
<p> Example: In your home, if you have multiple devices like computers, laptops, smartphones, and a printer all connected to the same Wi-Fi router, they form a LAN. This setup allows you to share files, access the internet, and use the printer from any connected device.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691311372219/ebbde2a8-dcb2-40af-8102-0ad462442183.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Wireless Local Area Network (WLAN):</strong></p>
<p> A Wireless Local Area Network (WLAN) is a variation of LAN that uses wireless technology (Wi-Fi) to connect devices to the network, eliminating the need for physical cables.</p>
<p> Example: Imagine you visit a café with free Wi-Fi. When you connect your laptop or smartphone to the café's Wi-Fi network, you're accessing their WLAN. Now, you can browse the internet and stay connected without any physical connections.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691311424738/66b20e71-8686-42bc-84f9-8eabf66f8ed6.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Metropolitan Area Network (MAN):</strong></p>
<p> A Metropolitan Area Network (MAN) is larger than a LAN and connects multiple LANs within a city or metropolitan area. It allows data sharing and communication over a broader geographical range.</p>
<p> Example: In a university campus, each department may have its LAN. These LANs can be connected through a MAN, enabling students and staff to access resources and communicate across the entire campus.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691311465248/984bf9e0-d801-4809-a70c-b147e3277ca6.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Wide Area Network (WAN):</strong></p>
<p> A Wide Area Network (WAN) spans a vast geographical area, connecting multiple networks, including LANs and MANs, across cities, states, or even countries. WANs enable communication over long distances.</p>
<p> Example: Consider a multinational company with offices in different countries. The company's WAN allows employees in different locations to collaborate, share data, and access centralized resources as if they were in the same office.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691311510363/d4cdb373-54df-41ab-97c8-703c76e92cea.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Campus Area Network (CAN):</strong></p>
<p> A Campus Area Network (CAN) connects multiple LANs within a larger educational or corporate campus. CANs facilitate efficient data transfer and communication between various buildings or departments on the campus.</p>
<p> Example: A university campus may have separate LANs in each college or school building. The CAN connects these LANs, allowing students, faculty, and staff to access resources across the entire campus easily.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691311552297/dfad3a3c-5fb0-4f9e-819b-5a98eb268de6.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Virtual Private Network (VPN):</strong></p>
<p> A Virtual Private Network (VPN) is a secure network that enables users to access a private network over a public network (usually the internet). VPNs ensure data privacy and encryption, making them ideal for remote access and secure communication.</p>
<p> Example: If you work from home and need to access your company's internal files and resources securely, you can use a VPN. The VPN encrypts your data, ensuring that sensitive information remains protected while you work remotely.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691311610613/95613eb8-c3a6-441d-a5dc-7933e2cde635.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<h3 id="heading-what-is-internetworking">What is Internetworking?</h3>
<p>Internetworking refers to the process of connecting multiple individual computer networks together to form a larger network, allowing devices from different networks to communicate and share information. It is the foundation of the Internet and other interconnected systems that enable global communication.</p>
<p>Example: Consider a scenario where three separate offices in different locations have their own Local Area Networks (LANs). Internetworking would involve connecting these LANs through routers or switches to create a Wide Area Network (WAN). This interconnected WAN would enable seamless communication between employees in all three offices, allowing them to share files, access resources, and collaborate effectively as if they were all part of a single network.</p>
<h3 id="heading-difference-between-internet-intranet-and-extranet"><strong>Difference between Internet, Intranet and Extranet</strong></h3>
<p><strong>Internet:</strong></p>
<ul>
<li><p>The Internet is like a gigantic global network that connects computers and devices all around the world.</p>
</li>
<li><p>It's a place where you can browse websites, send emails, watch videos, and do many other things.</p>
</li>
<li><p>Anyone with an internet connection can access the Internet.</p>
</li>
<li><p>Example: When you search for information on Google or chat with friends on social media, you're using the Internet.</p>
</li>
</ul>
<p><strong>Intranet:</strong></p>
<ul>
<li><p>An intranet is like a private network just for a specific group, like people who work at the same company or go to the same school.</p>
</li>
<li><p>It's a safe and private space where people within that group can share files, send messages, and work together.</p>
</li>
<li><p>Only authorized members can use an intranet, and it's not accessible from the outside.</p>
</li>
<li><p>Example: If you work at a big company, the intranet might have important documents, schedules, and a way to talk to your colleagues.</p>
</li>
</ul>
<p><strong>Extranet:</strong></p>
<ol>
<li><p>An extranet is a bit like a special guest area within an intranet. It lets certain people from outside the group access specific parts of the network.</p>
</li>
<li><p>It's useful for sharing information with partners, clients, or customers in a secure way.</p>
</li>
<li><p>Just like the intranet, you need permission to access an extranet, and not everyone can get in.</p>
</li>
<li><p>Example: Imagine you're a customer who wants to track your online orders. The company might have an extranet where you can log in and see the status of your orders without seeing all their internal stuff.</p>
</li>
</ol>
<h3 id="heading-protocol">Protocol</h3>
<p>A <strong>protocol</strong> is a set of rules that governs data communication. Rules are defined for every step of the communication process between two or more computers, and devices on a network must follow these protocols to transmit data successfully. Protocols may be implemented in hardware, software, or a combination of both. A protocol has three key aspects:</p>
<ul>
<li><p><strong>Syntax –</strong> defines the format of the data to be sent or received.</p>
</li>
<li><p><strong>Semantics –</strong> defines the meaning of each section of the transmitted bits.</p>
</li>
<li><p><strong>Timing –</strong> defines when data is transferred and at what speed.</p>
</li>
</ul>
<h3 id="heading-network-devices">Network Devices</h3>
<p><strong>Network Devices:</strong> Network devices, also known as networking hardware, are physical devices that allow hardware on a computer network to communicate and interact with one another. Examples include the repeater, hub, bridge, switch, router, gateway, brouter, and NIC.</p>
<ol>
<li><p><strong>Hub:</strong> Imagine a hub as a central meeting place where all your friends gather to share news. In a network, a hub is a device that brings different computers together, but it's not very smart. When one computer sends a message, the hub sends it to all other computers, whether they want it or not. Hubs are like shouting in a room where everyone hears, even if the message isn't meant for them. Hubs aren't used much anymore because they can cause a lot of unnecessary network traffic.</p>
</li>
<li><p><strong>Repeater:</strong> A repeater is like an echo chamber. It listens to a weak signal from one side and then repeats it with more power on the other side. This helps to extend the distance a signal can travel in a network. Think of it as a relay runner in a race – the original runner passes the baton to the repeater, who runs a bit more and then passes the signal to the next runner.</p>
</li>
<li><p><strong>Bridge:</strong> A bridge is like a translator between two groups of friends who speak different languages. In a network, a bridge connects two smaller networks together, making them work as if they're one big network. It pays attention to the messages going back and forth and only lets the necessary ones through. It's like letting only the important parts of a conversation between different language speakers be heard.</p>
</li>
<li><p><strong>Switch:</strong> A switch is like a smart postman who knows exactly where to deliver mail. In a network, a switch connects many computers and only sends a message to the specific computer it's meant for. It's like having personal mailboxes for each friend, so the postman doesn't have to shout messages to everyone. Switches make networks faster and more efficient.</p>
</li>
<li><p><strong>Router:</strong> A router is like a GPS for data on the internet. When you send something online, a router figures out the best path to get it to the right place. It's like a travel planner for your data packets. Routers also keep your home network separate from the outside world to keep your data safe.</p>
</li>
<li><p><strong>Gateway:</strong> A gateway is like a door between two different neighborhoods. It connects two different types of networks, helping them communicate even if they use different rules. Imagine translating a conversation between people who speak different languages – that's what a gateway does for different kinds of networks.</p>
</li>
<li><p><strong>Brouter:</strong> A brouter is a bit like a bilingual friend who can understand and speak two languages fluently. It combines the features of a bridge and a router. If it gets a message meant for a computer on the same network, it acts like a bridge and sends it directly. If the message is for a different network, it acts like a router and forwards it.</p>
</li>
<li><p><strong>NIC (Network interface card):</strong> A NIC is like a Communication helper. It's a card that goes inside your computer and helps it talk to the network. Just like how you need a phone to talk to someone, your computer needs a NIC to talk to other computers on a network.</p>
</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[Basics of Computer Network]]></title><description><![CDATA[Welcome to the fascinating world of computer networks! In this blog, we'll embark on a journey to explore the basics of computer networks and understand how they enable us to connect and communicate in the digital age. Whether you're a tech enthusias...]]></description><link>https://blog.sushant.dev/basics-of-computer-network</link><guid isPermaLink="true">https://blog.sushant.dev/basics-of-computer-network</guid><category><![CDATA[networking]]></category><category><![CDATA[internet]]></category><category><![CDATA[computer networking]]></category><category><![CDATA[computer network]]></category><dc:creator><![CDATA[Sushant Pathare]]></dc:creator><pubDate>Sat, 05 Aug 2023 10:48:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1691212198381/79a2514d-c031-4926-97b1-58acd71ff932.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Welcome to the fascinating world of computer networks! In this blog, we'll embark on a journey to explore the basics of computer networks and understand how they enable us to connect and communicate in the digital age. Whether you're a tech enthusiast, a student, or simply curious about how the internet works, this beginner's guide will break down the essential concepts with easy-to-understand examples.</p>
<h3 id="heading-what-is-a-computer-network"><strong>What is a Computer Network?</strong></h3>
<p>A computer network is a collection of interconnected devices, such as computers, servers, printers, smartphones, and more, that can share data and resources with each other. Imagine it as a highway system that allows information to flow from one device to another, creating a web of connections that power our modern world.</p>
<h3 id="heading-types-of-computer-networks"><strong>Types of Computer Networks:</strong></h3>
<ol>
<li><p><strong>Local Area Network (LAN):</strong> A LAN is a network that covers a small geographical area, such as a home, office, or school. It allows devices within this limited area to share resources and communicate with each other directly. For instance, your home Wi-Fi network is a type of LAN. Whenever you stream a video from your computer to a smart TV, you're utilizing a local network.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691231900393/98800cd4-83c2-41f6-b67e-15beb8c1a754.jpeg" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Wide Area Network (WAN):</strong> A WAN, on the other hand, spans a larger geographical area, connecting devices over long distances. The internet is the most prominent example of a WAN. When you send an email to a friend on the other side of the world or access a website hosted in a different country, you're utilizing the power of WANs.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1691231963792/ac9ad4cb-1af6-45c3-83c4-aa754a0f7a86.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<h3 id="heading-basic-components-of-a-computer-network"><strong>Basic Components of a Computer Network:</strong></h3>
<ol>
<li><p><strong>Nodes:</strong> Nodes are the devices connected to the network. They can be computers, laptops, smartphones, servers, printers, or any other device capable of sending or receiving data.</p>
</li>
<li><p><strong>Links:</strong> Links are the communication channels that connect nodes within a network. These channels can be wired (like Ethernet cables) or wireless (like Wi-Fi or Bluetooth).</p>
</li>
<li><p><strong>Switches and Routers:</strong> Switches and routers are essential network devices that help manage the flow of data. A switch connects devices within a local network, while a router connects different networks together. They act as traffic directors, ensuring that data reaches its intended destination.</p>
</li>
</ol>
<h3 id="heading-example-time"><strong>Example Time:</strong></h3>
<p>Let's imagine a small office network as an example. In this office, there are five computers (nodes) connected to a switch. The switch acts as the central hub and allows these computers to share files and resources, like a shared printer. When one employee wants to print a document from their computer, the data flows through the switch to the printer, and the document is printed. In this scenario, the switch is a vital component of the LAN, allowing seamless communication and resource-sharing.</p>
<p><strong>Data Transmission:</strong></p>
<p>Data is transmitted through networks using packets. Imagine a packet as a parcel containing a part of your message. When you send an email or visit a website, the data is divided into multiple packets, each containing a specific portion of the message. These packets travel independently across the network, taking different routes to reach the destination. Once they all arrive, the receiving device reassembles them to recreate the original data.</p>
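<p>The split-and-reassemble idea above can be sketched in a terminal. This is only a toy analogy, not a real network protocol: the message and chunk size are made up, and the file-name suffixes that <code>split</code> generates play the role of sequence numbers.</p>

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
msg="Hello over the network!"
# Split the message into 4-byte chunks ("packets"); split names them
# packet_aa, packet_ab, ... -- the suffix acts as a sequence number.
printf '%s' "$msg" | split -b 4 - packet_
ls packet_*                      # each file is one "packet"
cat packet_* > reassembled.txt   # glob order == sequence order
cat reassembled.txt              # prints the original message
```

<p>Real protocols do the same job with headers carrying sequence numbers, so packets that arrive out of order can still be put back in order.</p>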
<h3 id="heading-protocols"><strong>Protocols:</strong></h3>
<p>Protocols are a set of rules and conventions that govern how data is transmitted and received over a network. One well-known example is the Transmission Control Protocol (TCP) and Internet Protocol (IP) combination, commonly referred to as TCP/IP. These protocols ensure reliable and orderly data transmission across the internet.</p>
<h3 id="heading-internet-and-beyond"><strong>Internet and Beyond:</strong></h3>
<p>The internet is the largest and most famous WAN, connecting billions of devices worldwide. Beyond the internet, networks play a crucial role in various fields, such as healthcare, finance, transportation, and more. From automated vehicles communicating with each other to medical devices transmitting patient data to doctors, networks shape the future of technology.</p>
<p>In conclusion, computer networks are the backbone of our interconnected world, enabling seamless communication and resource-sharing. Understanding the basics of networks empowers us to appreciate the magic behind the internet and the technology that drives our daily lives.</p>
<p>So the next time you send a message, stream a movie, or share a photo with a friend, remember the incredible network that makes it all possible! Happy networking!</p>
]]></content:encoded></item><item><title><![CDATA[Create your first pull request]]></title><description><![CDATA[This is the second part of my two-part Getting Started with Git and GitHub blog series. In this blog, we will be focusing on working with existing projects on GitHub. There are many projects out there on GitHub to which you can contribute and get sta...]]></description><link>https://blog.sushant.dev/create-your-first-pull-request</link><guid isPermaLink="true">https://blog.sushant.dev/create-your-first-pull-request</guid><category><![CDATA[Linux]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[Git]]></category><dc:creator><![CDATA[Sushant Pathare]]></dc:creator><pubDate>Wed, 26 Jul 2023 13:02:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1690358494749/df3e0ba2-c38e-499f-965b-122fd2cc499c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This is the second part of my two-part <a target="_blank" href="https://hashnode.com/post/clkgvzoz9000109mi0dv69es1">Getting Started with Git and GitHub</a> blog series. In this blog, we will be focusing on working with existing projects on GitHub. There are many projects out there on GitHub to which you can contribute and get started with open source. To learn this, we first need to understand the concept of branching in Git and why we use branching</p>
<h2 id="heading-what-is-branching">What is branching?</h2>
<p>Branching in Git and GitHub allows developers to create separate lines of development, enabling them to work on different features or fixes without affecting the main codebase. Think of it as creating a <strong>copy</strong> of the project to experiment and develop independently, while still having the option to merge changes back into the main codebase later.</p>
<h2 id="heading-why-branching">Why branching?</h2>
<p><strong>Real-life example</strong>: Imagine you are working on a website with your team. The website is functional, but you want to add new features like user authentication and a blog section. To avoid directly making changes to the live website (main codebase), you create a separate branch for each feature. One branch is dedicated to implementing user authentication, and another branch focuses on the blog section.</p>
<p>Now, team members can work simultaneously on different features without interfering with each other's code. Once a feature is complete, it can be tested independently on its branch. Once everything is thoroughly tested and reviewed, you can merge the branches back into the main codebase, making the new features available on the live website. This way, branching in Git and GitHub enables seamless collaboration and organized development on projects.</p>
<h2 id="heading-branching-in-git">Branching in git</h2>
<p>In the last blog, we uploaded our first project to GitHub. In that project, we will now create some branches. To create a branch, we use the <code>git branch &lt;branch-name&gt;</code> command. So, go to your project and type this command.</p>
<p><code>git branch football</code></p>
<p>Now that you have created a branch named 'football', you can check whether it was successfully created using the command <code>git branch -a</code>, which lists all local and remote-tracking branches.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690361501341/7346c0c8-a53f-492b-b43b-dd97a62f0b47.png" alt class="image--center mx-auto" /></p>
<p>In the image above, you can see all the branches: the <code>football</code> branch that you created and the <code>master</code> branch, which is the default. Notice the asterisk (*) in front of the <code>master</code> branch: it means HEAD is currently on <code>master</code>, so any changes made so far belong to that branch. To switch to the <code>football</code> branch, we use the <code>git checkout &lt;branch-name&gt;</code> command.</p>
<p>Now, in the terminal, type <code>git checkout football</code> and then use <code>git branch -a</code> to list the branches.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690362103611/9b88c4cd-8fa5-4a55-ac1f-86c1c38e9d08.png" alt class="image--center mx-auto" /></p>
<p>Now you can see that the (*) asterisk is now on the <code>football</code> branch, which means that any changes you make will be on the <code>football</code> branch. So, let's make some changes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690362730371/53eb5250-9ca5-4561-9b92-edf0d5a3f65e.png" alt class="image--center mx-auto" /></p>
<p>I added one text file and committed that file. Now, let's push that file to GitHub. The command is <code>git push origin &lt;branch-name&gt;</code>. Since we want to push to the <code>football</code> branch, we have to type <code>git push origin football</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690363059075/5462fe43-1288-458c-b9c5-b045de594bf3.png" alt class="image--center mx-auto" /></p>
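<p>The steps so far can be replayed end-to-end in a throwaway directory. The directory, user identity, and sample files are illustrative, and the push is commented out because it needs a real GitHub remote (<code>git init -b</code> requires Git 2.28+):</p>

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b master                  # -b names the initial branch
git config user.email demo@example.com # throwaway identity for the demo
git config user.name "Demo User"
echo "hello" > readme.txt
git add . && git commit -qm "initial commit"
git branch football                    # create the branch
git checkout -q football               # switch HEAD to it
echo "notes" > football.txt
git add football.txt && git commit -qm "add football.txt"
git branch -a                          # the asterisk now marks football
# With a GitHub remote configured, you would now publish the branch:
# git push origin football
```
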
<p>Now let's view the changes on GitHub. To find your repository's URL, just type <code>git remote -v</code>, and it will show you all the remote URLs attached to your project.</p>
<p>Open the link in your browser. You won't see the football.txt file there, because you are viewing the master branch. Switch the branch to <code>football</code>, and you will see all the changes you made.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690363743271/c8be4693-f0b4-48e1-95e7-9721f9c02958.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690363754321/ecd11ce6-253a-41c1-8215-3ee8c426a5cc.png" alt class="image--center mx-auto" /></p>
<p>Now, to merge it into our main branch, there are two ways to do it. You will see a green button that says <code>Compare &amp; pull request</code>; you can use that, but for now, let's do it from the command line. Go to the terminal and check out the branch you want to merge into; in most cases, that is the main/master branch. Then, type <code>git merge &lt;branch-name&gt;</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690364170856/4f62ce8b-de45-44a1-8810-6a19ad651c99.png" alt class="image--center mx-auto" /></p>
<p>To push the changes, use <code>git push origin master</code>. Now both branches are merged: when you go to GitHub, you will see that the master branch also contains the football.txt file.</p>
<p>Now let's delete that branch. To delete a branch locally, we use the command <code>git branch --delete &lt;branch-name&gt;</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690364694720/0e1157fe-8132-4492-9e4e-c375771dd2d9.png" alt class="image--center mx-auto" /></p>
<p>The branch is deleted locally, but it's still available on GitHub. To delete it from GitHub, type <code>git push origin --delete &lt;branch-name&gt;</code>.</p>
<p>In our case:</p>
<p><code>git push origin --delete football</code></p>
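<p>The merge-and-delete sequence can also be rehearsed in a throwaway local repository. The names and files here are illustrative, and the push commands are commented out because they need a real GitHub remote:</p>

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b master                  # Git 2.28+ for -b
git config user.email demo@example.com # throwaway identity for the demo
git config user.name "Demo User"
echo "hello" > readme.txt
git add . && git commit -qm "initial commit"
git checkout -qb football              # create and switch in one step
echo "notes" > football.txt
git add . && git commit -qm "add football.txt"
git checkout -q master                 # check out the branch to merge into
git merge -q football                  # master now contains football.txt
git branch --delete football           # safe to delete: fully merged
# Against a real GitHub remote you would also run:
# git push origin master
# git push origin --delete football
```
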
<h2 id="heading-make-your-first-pull-request-on-github"><strong>Make your first pull request on GitHub</strong></h2>
<p>Let's assume there is a repository you love or a project you want to contribute to. To do so, you want to make a copy of that project on your system. In order to achieve this, you need to first fork the project. So, what is forking?</p>
<h3 id="heading-what-is-forking">What is Forking?</h3>
<p>Forking in Git and GitHub is the process of creating a personal copy of someone else's repository on GitHub, allowing you to freely experiment and make changes without affecting the original project. It is a fundamental mechanism for open-source collaboration, enabling contributors to propose changes and improvements to projects they are interested in.</p>
<p>Let's learn this with an example:<br />There is one project on vaishali86c's account named nice-project that you want to fork.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690371139708/1d70cdae-1688-459f-a64e-d6561ee1f5a8.png" alt class="image--center mx-auto" /></p>
<p>To fork this project, you have to click on the <strong>Fork</strong> option.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690371229959/04a9f52e-42c3-49cb-a211-8e74560082e1.png" alt class="image--center mx-auto" /></p>
<p>After forking, you have a copy of the original project on your account, and you can do anything with it. First, we need to get it onto our system by cloning the repository. To do so, copy the link of your fork: click the green <strong>Code</strong> button, and you will get the HTTPS link. Copy it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690371643314/d96287bd-1743-42de-90bb-17b087fd0ba0.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-what-is-cloning">What is Cloning?</h3>
<p>Cloning in Git is a process that allows you to make an exact copy of a repository from a remote server (like GitHub) to your local machine. In simple terms, it's like downloading a project from the internet to your computer, so you can work on it locally.</p>
<p>When you clone a repository, you get the entire version history, all the files, and the entire project's structure. This means you have access to all the code and commit history that exists on the remote repository.</p>
<p>To clone the repository, go to the terminal and type the command <code>git clone &lt;URL&gt;</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690372003406/e061b173-223e-4b7a-87bd-cbf9544ae29b.png" alt class="image--center mx-auto" /></p>
<p>Now that the repository is cloned, let's make some changes to the file. A good practice is to create branches to work on the project, so let's create a branch.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690372433548/0d7a8e51-01ef-41d3-a8ff-72187dfae6cf.png" alt class="image--center mx-auto" /></p>
<p>I created a branch named html and added a file to that branch. Now, let's push those changes to our forked repository. Use the command <code>git push origin html</code> to do so.</p>
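<p>The branch-and-push flow can be sketched end to end; here a local bare repository stands in for the fork on GitHub so the whole thing runs offline (the branch and file names mirror the example above and are illustrative):</p>

```shell
# A bare repository plays the role of your fork on GitHub.
rm -rf /tmp/fork.git /tmp/work
git init -q --bare /tmp/fork.git
git clone -q /tmp/fork.git /tmp/work 2>/dev/null
cd /tmp/work
git config user.email you@example.com
git config user.name "Your Name"

git checkout -qb html                # create and switch to a branch named html
echo "<h1>Hi</h1>" > index.html      # add a file on that branch
git add index.html
git commit -qm "add index.html"
git push -q origin html              # publish the branch to the fork
git ls-remote --heads origin         # the html branch now exists remotely
```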
<p>After doing so, go to your repository on GitHub. You will see an option for <strong>Compare &amp; Pull Request</strong>. Now, what is a pull request?</p>
<h3 id="heading-what-is-pull-request">What is a Pull Request?</h3>
<p>Pull requests are the way we contribute to group projects or open source projects.</p>
<p>For instance, a user xyz forks a repository of abc and makes changes to that repository. Now xyz can make a pull request to abc, but it’s up to abc to accept or decline it. It’s like saying, “abc, would you please pull my changes?”</p>
<p>Let's make our pull request:</p>
<ol>
<li><p>Go to your repo and click on the option <strong>Compare &amp; Pull Request</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690373266247/ac8f425c-84b4-4997-99c2-1c8ad358fed8.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>After that, you will have the option to create a pull request. You can explain your changes in the description.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690373922789/de864ade-babb-4fa8-9221-9fff5c3eaff8.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>After that, click on <strong>Create Pull Request</strong>. Congratulations, you have created your first pull request!</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690374177569/ecf48ca5-3928-4b8b-b1a0-6b2acb736b7e.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>The project owner now gets a notification of your pull request and can choose to merge it.</p>
</li>
</ol>
<p>Now, imagine you have cloned a repository, and after that, some changes were made in the original project. Let's see how we can sync our forked project.</p>
<h3 id="heading-sync-your-forked-main-branch"><strong>Sync your forked main branch</strong></h3>
<p>You can do this in two ways. The first one is simple: just go to the forked repo on GitHub and click the <strong>Sync fork</strong> option.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690375299543/f3e19490-8f50-481b-a1eb-9b7b1e0c2308.png" alt class="image--center mx-auto" /></p>
<p>Secondly, to do this through the terminal, we first have to add the upstream URL. The term "upstream URL" refers to the remote repository from which you forked your project. When you fork a repository, you create a copy of the original repository under your GitHub account. The upstream URL is the link to the original repository from which you made the fork.</p>
<p>The command to add an upstream URL is <code>git remote add upstream &lt;URL&gt;</code></p>
<p>To check whether the URL is added or not, type <code>git remote -v</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690375920963/16431bfb-a351-4cb7-90d6-18dceb399d6b.png" alt class="image--center mx-auto" /></p>
<p>To fetch the changes, type <code>git fetch upstream</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690376065341/70e2f5ea-ab6c-44f1-bfe0-3ed827e6c176.png" alt class="image--center mx-auto" /></p>
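<p>Putting the terminal route together as a runnable sketch (local repositories stand in for the original project and your clone; on a real fork, <code>origin</code> would be your fork's URL and <code>upstream</code> the original repository's URL — all paths here are illustrative):</p>

```shell
# "original" plays the role of the upstream project on GitHub.
rm -rf /tmp/original /tmp/myclone
git init -q -b main /tmp/original
git -C /tmp/original -c user.email=a@b.c -c user.name=a \
    commit -qm "v1" --allow-empty
git clone -q /tmp/original /tmp/myclone

# Meanwhile, the original project gains a new commit...
git -C /tmp/original -c user.email=a@b.c -c user.name=a \
    commit -qm "v2" --allow-empty

cd /tmp/myclone
git remote add upstream /tmp/original   # normally the original repo's URL
git fetch -q upstream                   # download its new history
git merge -q upstream/main              # fast-forward your main branch
git log --oneline                       # now shows both v1 and v2
```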
<p>Congratulations! If you've come this far, you've gained a solid understanding to kickstart your journey with Git and GitHub. Thank you for taking the time to read this guide. If you come across any corrections or have any doubts, feel free to leave a comment or message me on Twitter. You can find the link to my Twitter account in my profile.</p>
]]></content:encoded></item><item><title><![CDATA[Getting Started With Git And GitHub]]></title><description><![CDATA[What is Git And GitHub ?
Imagine you are working on a school project, like writing an essay. As you progress, you make changes to your essay regularly. But sometimes, you might want to go back to an earlier version of the essay, just in case you want...]]></description><link>https://blog.sushant.dev/getting-started-with-git-and-github</link><guid isPermaLink="true">https://blog.sushant.dev/getting-started-with-git-and-github</guid><category><![CDATA[GitHub]]></category><category><![CDATA[Git]]></category><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Sushant Pathare]]></dc:creator><pubDate>Mon, 24 Jul 2023 13:11:24 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1690016252450/20071d5a-44b4-4447-bf6f-24055557cf63.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-what-is-git-and-github">What is Git And GitHub ?</h3>
<p>Imagine you are working on a school project, like writing an essay. As you progress, you make changes to your essay regularly. But sometimes, you might want to go back to an earlier version of the essay, just in case you want to undo some recent changes or see what it looked like before.</p>
<p>Git is like having a magical time machine for your essay. It keeps track of every change you make, so you can easily go back in time and see previous versions. This way, you can experiment and make changes with confidence, knowing you can always go back to a previous point if needed.</p>
<p>Now, let's talk about GitHub. Think of GitHub as a special folder in the cloud where you can store your essay and its time-traveling history. It's like having a backup of your essay online. Not only that, but GitHub also allows you to share your essay with others, like your classmates or teachers, and they can even suggest changes or work on the essay together with you.</p>
<p>In summary:</p>
<ul>
<li><p>Git is the magical time machine that keeps track of changes in your project, so you can easily go back to previous versions.</p>
</li>
<li><p>GitHub is like a cloud folder that stores your project and its history. It also lets you share your project with others and work together as a team.</p>
</li>
</ul>
<h3 id="heading-why-we-are-using-git-and-github">Why Are We Using Git and GitHub?</h3>
<ol>
<li><p><strong>Keep Track of Changes:</strong> Git allows us to keep a record of all the changes we make to our projects. It's like a time machine that lets us go back to previous versions if something goes wrong or if we want to see how our project evolved over time.</p>
</li>
<li><p><strong>Undo Mistakes:</strong> With Git, if we make a mistake or accidentally delete something important, we can easily revert back to a previous working version. It's like having a safety net for our work.</p>
</li>
<li><p><strong>Collaborate with Others:</strong> GitHub is like a sharing platform for projects. We can upload our projects there and share them with friends or teammates. They can see our work, suggest changes, or even work on the project together with us. It's a fantastic way to work as a team without sending files back and forth through email.</p>
</li>
<li><p><strong>Backup in the Cloud:</strong> GitHub stores our projects safely on the internet. This means even if something happens to our computer or if we lose our files, the project is still safe and accessible on GitHub.</p>
</li>
<li><p><strong>Open Source Community:</strong> GitHub is home to many open-source projects. These are projects where developers from all around the world work together to create amazing software that is free for everyone to use. By using GitHub, we can join this vibrant community, learn from others, and contribute to exciting projects.</p>
</li>
</ol>
<h3 id="heading-downloading-git">Downloading Git</h3>
<p>Go to <a target="_blank" href="https://git-scm.com/"><strong>https://git-scm.com/</strong></a> and download the version suitable for your operating system.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690017472741/31862d8e-d668-475d-8a48-6c046aa110f9.png" alt class="image--center mx-auto" /></p>
<p>To check whether Git is installed on your system or not, open the terminal and type the <code>git</code> command. If Git is installed, it will show you a list of available Git commands, which confirms that Git is installed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690017753540/065161f4-6079-4628-a8d9-ed6777554968.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-some-basic-linux-commands">Some Basic Linux Commands</h3>
<p>Follow along to get a grasp of the topic</p>
<p>First, let's understand what a terminal actually is: a command-line interface from which we can navigate and manipulate the file system. Open the terminal and type the <code>ls</code> command; you will see a list of files and folders.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690018803733/b3544119-c4cf-4b45-9e7a-7593d0f058ad.png" alt class="image--center mx-auto" /></p>
<p>Then type <code>cd Desktop</code> in the terminal, and after that, type <code>ls</code>. Now you will see a list of all the files and folders present on the Desktop.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690019201122/24647500-b072-49b1-8503-eb836fcefe1b.png" alt class="image--center mx-auto" /></p>
<p>Here is what these two commands do:</p>
<ul>
<li><p><code>ls</code>: Lists all the files and directories of the current directory.</p>
</li>
<li><p><code>cd</code>: Stands for change directory.</p>
</li>
</ul>
<p>Now, type <code>mkdir project</code> in the terminal, and you will notice that a new folder/directory named 'project' is created on the desktop.</p>
<ul>
<li><code>mkdir</code>: stands for 'make directory,' which means it creates a new folder.</li>
</ul>
<p>To go inside the project directory, type <code>cd project</code>, and then use <code>ls</code> to list the folders/files inside it. At this point, the 'project' directory will be empty.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690021579884/f5084594-4077-46e2-a841-2f1fc0288f41.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-initializing-a-git-repository">Initializing a Git Repository</h3>
<p>Now, as we learned above, Git stores the history of our project, including all the files we modified and deleted. This history lives in a hidden directory known as <code>.git</code>, which Git creates when we initialize a repository using the <code>git init</code> command.</p>
<ul>
<li><code>git init</code> - Initializes the Git repository.</li>
</ul>
<p>To create the <code>.git</code> directory, type the <code>git init</code> command inside the project directory.</p>
<p>But when you list the project directory, you won't see anything named <code>.git</code>. In Linux and macOS, all files that start with a period (.) are hidden. To see such files, we have to pass an option to the <code>ls</code> command: <code>ls -a</code>.</p>
<blockquote>
<p>Note: In the <code>ls -a</code> command, the option 'a' stands for 'all,' which means it lists all hidden files and folders.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690022808208/5d449287-1c02-48dc-a7cc-9592ea5266a9.png" alt class="image--center mx-auto" /></p>
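<p>A quick sketch of the steps so far (the directory path is illustrative):</p>

```shell
rm -rf /tmp/project
mkdir /tmp/project && cd /tmp/project
git init -q        # creates the hidden .git directory
ls -a              # lists everything, including .git
```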
<p>Now, imagine a scenario of a wedding photoshoot. There is one couple on stage, and relatives want to take pictures with the couple. Just like that wedding scenario, we also need to take photos or snapshots of our project to store them in history.</p>
<p>Let's imagine it together. First, we have to make a change in the project folder. So, go to the project folder and create a file named cricket.txt. To create a file, we use a command called <code>touch</code>. In the command line, type <code>touch cricket.txt</code>.</p>
<ul>
<li><code>touch</code>: Used to create a file.</li>
</ul>
<p>Now we have created a file, and to store it in the history of <code>.git</code>, let's map this onto our scenario. But before that, to check which changes have been made and whether they are staged yet, we have a command called <code>git status</code>.</p>
<ul>
<li><code>git status</code>: gives the current status of the repository.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690024404849/8baa032b-f197-4ab8-98c8-8d0de8cc4b68.png" alt class="image--center mx-auto" /></p>
<p>We can see that it gives us untracked files with filenames in red color. Comparing it with our wedding scenario, these untracked files are like the relatives who haven't taken photos with the couple yet. To take photos of them with the couple, the relatives have to go to the stage. Similarly, to store snapshots of our project's changes in history, we need to put our project on the stage. The command to add a specific file is <code>git add &lt;file name&gt;</code> , and to add all files, we use <code>git add .</code>.</p>
<p>Now, when you type <code>git add .</code> and then <code>git status</code>, you'll see that the file is shown in green, which means it is on the stage, ready for its photo.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690025138733/1282922e-0d4d-43d5-a94e-ea3b2719400f.png" alt class="image--center mx-auto" /></p>
<p>Now that the photo has been taken, we can save the photo in our photo album. Similarly, we use the <code>git commit</code> command to store a snapshot of our project in its history.</p>
<blockquote>
<p>Syntax: <code>git commit -m "message"</code> (where <code>-m</code> stands for "message").</p>
</blockquote>
<p>As we added a new file, type <code>git commit -m "cricket.txt added"</code>.</p>
<p>Now, after that, when you type <code>git status</code> again, it will check whether any other changes have been made in the project. If not, it will report that the working tree is clean.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690183679825/4e31abeb-1810-46e9-a844-0c0a5437b31a.png" alt class="image--center mx-auto" /></p>
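<p>The whole snapshot cycle, from untracked file to clean working tree, can be sketched as a runnable block (the directory path is illustrative):</p>

```shell
rm -rf /tmp/album && mkdir /tmp/album && cd /tmp/album
git init -q
git config user.email you@example.com
git config user.name "Your Name"

touch cricket.txt       # a new, untracked file (a relative off stage)
git status --short      # shows: ?? cricket.txt  (untracked)
git add cricket.txt     # stage it (bring it on stage)
git status --short      # shows: A  cricket.txt  (staged)
git commit -qm "cricket.txt added"   # take the photo
git status              # nothing to commit, working tree clean
```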
<p>Now, let's make a new change. Open the cricket.txt file using another command known as Vim: type <code>vim cricket.txt</code>. Once the file is open, press <code>i</code> to enter insert mode and type some text. To save and exit, press <code>Esc</code>, then type <code>:x</code> and hit <code>Enter</code>. After saving, guess what happens when you type <code>git status</code> again. We added some text, right? Therefore, it shows that cricket.txt was modified.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690193949309/5553185b-5500-42e8-8a5f-10276b9c895f.png" alt class="image--center mx-auto" /></p>
<p>To photograph this change, we need to get it on stage again. As we already know, we type <code>git add .</code> to do so. Now, imagine we staged it by accident and don't want to take its picture yet. To unstage it, we use <code>git reset cricket.txt</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690196253375/6dac000f-5cd0-459c-9689-e63558c51a4a.png" alt class="image--center mx-auto" /></p>
<p>Now, let's take a snapshot of the project and add it to the history. A question arises: where is the history of our project stored? To check it, we have the <code>git log</code> command, which lists all the commits we made.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690196910247/46975e9e-31b2-4bb3-a139-83e3fa41b8b7.png" alt class="image--center mx-auto" /></p>
<p>Here, we see that there are two commits which we made earlier: one is for adding the file, and the second one is for modifying the file. Now, let's add a few more commits. Let's add one more file and make some changes to it, then commit those changes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690197390157/aa1cf595-41d5-43d0-bd7a-7f55bfd9f04f.png" alt class="image--center mx-auto" /></p>
<p>I added a file named youtuber.txt, added some text inside it, then deleted that file, and committed those changes.</p>
<p>Now, let's assume you accidentally deleted that file and want to undo that change. In short, we want to go back to the state of our project two commits earlier. To do so, we'll use the <code>git reset</code> command. First, type <code>git log</code> to see the commit history.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690197857329/6b4ddce2-3174-45cc-9ad0-92cae9dc2754.png" alt class="image--center mx-auto" /></p>
<p>We want to see how our project looked on <code>Mon Jul 24, 2023, at 16:35:18 +0530</code>. To do so, copy that commit's ID and type <code>git reset &lt;commit ID&gt;</code> (add <code>--hard</code> if you also want the files in your working directory to revert). Now, when you type <code>git log</code>, you will see that the last two commits are gone and only the first two remain.</p>
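<p>The rewind can be sketched as a runnable block. Commit IDs differ on every machine, so the script reads the target ID from the history instead of hard-coding one (the path and commit messages are illustrative):</p>

```shell
rm -rf /tmp/rewind && mkdir /tmp/rewind && cd /tmp/rewind
git init -q
git config user.email you@example.com
git config user.name "Your Name"

# Build a four-commit history to rewind.
for msg in first second third fourth; do
  echo "$msg" > note.txt
  git add note.txt && git commit -qm "$msg"
done

# Pick the ID of the 2nd commit, then rewind the history to it.
TARGET=$(git rev-list --reverse HEAD | sed -n 2p)
git reset -q "$TARGET"    # history now ends at "second"
git log --oneline         # only the first two commits remain
```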
<p>Our terminal looks too messy by now. To clear it, type the <code>clear</code> command.</p>
<h3 id="heading-stashing-changes">Stashing changes</h3>
<p>Now, let's imagine you've made some changes, but you don't want to commit them yet. You want a clean working tree without losing those changes, and you want to retrieve them whenever you need. For this purpose, there is a command called <code>git stash</code>.</p>
<p>In our example, <code>git stash</code> can be likened to taking the changes you have staged and moving them backstage (unstaging). Whenever you want, you can bring them back on stage (staging) and take the photo (commit) later.</p>
<p>To get them back, the command is <code>git stash pop</code>.</p>
<p>To delete it, use <code>git stash clear</code>.</p>
<p>Let's practice it. Create some new files, and stage them with <code>git add .</code> (stash only records files Git knows about, so brand-new files must be staged first).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690200232186/189f39b9-9437-45ed-a51e-ed2081bfccd9.png" alt class="image--center mx-auto" /></p>
<p>Now, type <code>git stash</code>, and you will see that the files are gone; <code>git status</code> shows the working tree clean.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690200320878/fa54b9b8-5618-4c54-8a82-68e833e6db12.png" alt class="image--center mx-auto" /></p>
<p>To get it back, type <code>git stash pop</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690200465659/108336e3-aa5e-4c36-b221-a6d664ea43b5.png" alt class="image--center mx-auto" /></p>
<p>And to delete them, first stash them again, and then type <code>git stash clear</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690200591779/864200c5-a4eb-454f-b88a-920ded91f30f.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>Note: When you clear the stash, it is deleted permanently.</p>
</blockquote>
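<p>The stash workflow end to end, as a runnable sketch (the path and file names are illustrative):</p>

```shell
rm -rf /tmp/stashdemo && mkdir /tmp/stashdemo && cd /tmp/stashdemo
git init -q
git config user.email you@example.com
git config user.name "Your Name"
echo "base" > notes.txt
git add . && git commit -qm "base"

echo "draft" >> notes.txt   # an uncommitted change we aren't ready to keep
git stash                   # shelve it; the working tree is clean again
git status --short          # prints nothing
git stash pop               # bring the change back on stage
grep draft notes.txt        # the draft line is restored

git stash                   # shelve it once more...
git stash clear             # ...and delete it permanently
```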
<h3 id="heading-creating-a-repository-on-github">Creating a repository on GitHub</h3>
<p>Now that we've learned a lot about Git, let's move on to GitHub and upload our project there. To do so, first create an account on <a target="_blank" href="http://www.github.com">GitHub</a>. It's simple; just go to the official GitHub website and sign up. After that, let's create our own repository:</p>
<ol>
<li><p>Click on the profile icon in the top right corner.</p>
</li>
<li><p>Click on the 'Your repositories' section.</p>
</li>
<li><p>Click the green <strong>New</strong> button and create a repository with whatever name you want.</p>
</li>
</ol>
<p>Now we have created our repository. Let's connect it to our local project: copy the repository's URL and type the command <code>git remote add origin &lt;URL&gt;</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690201808568/9694593b-19fe-4872-84ae-bb5da87cd12c.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690201838073/16a1181c-7022-4b45-8ff3-5609509615d6.png" alt class="image--center mx-auto" /></p>
<p><em>The</em> <code>git remote add origin</code> <em>command is like telling Git where your project's online home is. It connects your local project to a remote location, such as a repository on a website like GitHub. By adding the</em> <code>"origin"</code> <em>remote, you can easily push your local changes to the remote repository and pull the changes made by others back into your local project. It's a way to link your local work to a central place where you and your collaborators can store and share the project's code</em></p>
<p>Now, go to your project and type the command. After entering it, your local folder is linked to the online repository on GitHub. To view which remotes are attached, there is a command called <code>git remote -v</code>, which lists all the URLs attached to the folder.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690202382444/50c7b756-01c6-4685-aff4-85aa8b2cdcba.png" alt class="image--center mx-auto" /></p>
<p>Now we see that our URL is attached. However, when you check GitHub, you'll notice that the repository is still empty. To upload our project to GitHub, use the command <code>git push origin main</code> (or <code>master</code>, whichever your default branch is named).</p>
<p><code>git push origin main</code> <em>is a command used in Git to send your local code changes to a remote repository on GitHub. The "origin" refers to the remote repository's name, and "main" (or "master") is the branch whose changes you want to push. By using this command, you make your local code available and visible on the GitHub repository.</em></p>
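<p>The connect-and-push steps can be sketched as a runnable block; a local bare repository stands in for the new, empty GitHub repository, so the commands run offline (all paths are illustrative — with a real repo you would use its GitHub URL):</p>

```shell
# A bare repository plays the role of the new, empty GitHub repo.
rm -rf /tmp/github.git /tmp/local
git init -q --bare /tmp/github.git

git init -q -b main /tmp/local && cd /tmp/local
git config user.email you@example.com
git config user.name "Your Name"
echo "# My project" > README.md
git add . && git commit -qm "initial commit"

git remote add origin /tmp/github.git   # normally: the repo's GitHub URL
git remote -v                           # lists the attached remotes
git push -q origin main                 # upload the main branch
```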
<p>Now, when you refresh GitHub, you will see that your project has been uploaded.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690202807892/673cb030-804a-4933-a984-23f36a69b71f.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690202839982/c94fe4e9-356b-4b7c-a298-a791f7fdc7d6.png" alt class="image--center mx-auto" /></p>
<p>Congratulations! You just created your first repository on GitHub. There are so many concepts still to be covered, like cloning repositories and branches, which I will be covering in my next blog. To get notified, follow me on my socials. In case of any doubts, you can ask me there. Thanks for reading my blog, and if you like it, please leave a comment. If you don't, then please give me suggestions in the comment section.</p>
]]></content:encoded></item></channel></rss>