
Delete stale git branches

Working on monolithic projects with a large team of developers often leads to an accumulation of stale git branches. Branches get left behind because of unresolved merge conflicts, abandoned tasks, or even employee turnover. Over time, this can leave a repository with thousands of inactive branches, which can affect your CI/CD process and increase the time taken by git clone operations. To address this, we can implement a process that identifies and deletes branches that have been inactive for a certain period, say 120 days.
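One way the process above could be sketched is with a small Python script that asks git for the last commit date of every remote branch and flags the ones past the cutoff. This is a minimal sketch, not the article's exact tooling: the `origin` remote, the 120-day cutoff, and the helper names are illustrative, and it assumes you have already run git fetch --prune so the remote refs are current.

```python
# Sketch: list remote branches whose last commit is older than a cutoff.
# Assumes refs/remotes/origin is up to date (git fetch --prune first).
import subprocess
from datetime import datetime, timedelta, timezone

def parse_ref_lines(output, cutoff):
    """Parse 'refs/remotes/origin/<name> <unix-ts>' lines; return stale branch names."""
    stale = []
    for line in output.strip().splitlines():
        ref, ts = line.rsplit(" ", 1)
        last_commit = datetime.fromtimestamp(int(ts), tz=timezone.utc)
        if last_commit < cutoff:
            stale.append(ref.removeprefix("refs/remotes/origin/"))
    return stale

def find_stale_branches(days=120):
    # %(committerdate:unix) gives the last commit time as a Unix timestamp.
    out = subprocess.run(
        ["git", "for-each-ref",
         "--format=%(refname) %(committerdate:unix)", "refs/remotes/origin"],
        capture_output=True, text=True, check=True,
    ).stdout
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    return parse_ref_lines(out, cutoff)

# Deletion is destructive -- review the list before enabling something like:
#   for branch in find_stale_branches():
#       subprocess.run(["git", "push", "origin", "--delete", branch], check=True)
```

Keeping the deletion step commented out (or behind a dry-run flag) is deliberate: you want a human or a review step to see the candidate list before branches disappear.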

Day 29: Docker - Running and managing containers

Content

Part 1: Introduction to DevOps
Day 1: Understanding DevOps, its principles, and benefits
Day 2: Exploring the DevOps lifecycle and its stages
Day 3: Introduction to Continuous Integration (CI) and Continuous Deployment (CD)
Day 4: Familiarizing with common DevOps tools and technologies
Day 5: Studying DevOps culture and best practices

Part 2: Version Control Systems
Day 6: Introduction to Git
Day 7: Basic Git commands (git init, git add, git commit, git status)
Day 8: Branching and merging in Git
Day 9: Remote repositories and collaboration with Git
Day 10: Git workflows and best practices

Part 3: Continuous Integration and Continuous Deployment (CI/CD)

Day 28: Docker - Building and managing images

Day 27: Docker - Installation and configuration

Day 26: Introduction to containerization

Serverless Workflow to Process Files Uploaded to Amazon S3

This is a walkthrough of one of the labs I did on Udemy to build a serverless workflow in AWS. The task of the lab was to take any JSON file uploaded to an S3 bucket and store its data in DynamoDB.

Source code for the diagram:

    from diagrams import Cluster, Diagram
    from diagrams.aws.compute import Lambda
    from diagrams.aws.database import Dynamodb
    from diagrams.aws.integration import SQS
    from diagrams.aws.storage import S3

    with Diagram("Serverless Workflow", graph_attr={"margin": "-1"}, show=False):
        with Cluster("Amazon S3"):
            s3_bucket = S3("JSONFilesBucket")

        with Cluster("AWS SQS"):
            dlq = SQS("DeadLetterQueue")
            json_processing_queue = SQS("JSONProcessingQueue")
            json_processing_queue - dlq

        with Cluster("AWS Lambda"):
            lambda_function = Lambda("ProcessJSONFiles")

        with Cluster("Amazon DynamoDB"):
            dynamodb_table = Dynamodb("JSONItemTable")

        s3_bucket >> json_processing_queue >> lambda_function >> dynamodb_table

Things it covers:
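To make the flow concrete, here is a sketch of what the ProcessJSONFiles function could look like, assuming the S3 event notification is delivered to Lambda through JSONProcessingQueue (SQS). The resource names come from the diagram, but this is an illustration of the pattern, not the lab's exact code, and it assumes each uploaded JSON document already matches the table's key schema.

```python
# Sketch of a Lambda handler for the S3 -> SQS -> Lambda -> DynamoDB flow.
import json

def extract_s3_objects(sqs_event):
    """Pull (bucket, key) pairs out of the S3 notification wrapped in each SQS record."""
    objects = []
    for record in sqs_event.get("Records", []):
        body = json.loads(record["body"])        # the S3 event notification JSON
        for s3_record in body.get("Records", []):
            objects.append((
                s3_record["s3"]["bucket"]["name"],
                s3_record["s3"]["object"]["key"],
            ))
    return objects

def lambda_handler(event, context):
    import boto3  # provided by the Lambda runtime
    s3 = boto3.client("s3")
    table = boto3.resource("dynamodb").Table("JSONItemTable")
    objects = extract_s3_objects(event)
    for bucket, key in objects:
        obj = s3.get_object(Bucket=bucket, Key=key)
        item = json.loads(obj["Body"].read())
        table.put_item(Item=item)  # assumes the JSON fits the table's key schema
    return {"processed": len(objects)}
```

Failed messages go back to the queue and eventually land in DeadLetterQueue, which is why the SQS hop between S3 and Lambda is worth having: it gives you retries and a place to inspect bad payloads.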