Tuesday, March 14, 2017

AWS CodeBuild - HowTo

AWS CodeBuild is a managed build service that compiles source code, runs tests, and produces deployable artifacts, without requiring you to provision or manage your own build servers.

Typically CodeBuild is used as part of your CI/CD pipeline, perhaps along with other AWS tools like CodeCommit, CodePipeline and CodeDeploy.

This blog will explore the use of CodeBuild to build the Bedrock project and update a yum repository.  Along the way I'll detail some of the things I've learned and the path I took to automating the Bedrock build.

Sunday, March 12, 2017

AWS CodeBuild and why I have no hair left

I'll blog soon about AWS CodeBuild and CodePipeline once I get around to documenting all of my frustrations, but for now, to save some poor souls from losing their hair, here are some quick tips:


  1. CodeBuild is not very helpful regarding malformed YAML in your buildspec.yml file.  If things don't work, first check that your buildspec.yml file is well-formed YAML.
  2. As if that weren't bad enough, even when the YAML is well formed, CodeBuild may silently skip elements it doesn't recognize.  I inadvertently used pre-build instead of pre_build and lost some hair on that one.
  3. As verified in this blog post, CodeBuild will not upload artifacts to the root of a bucket - it really wants a folder name.  Odd really, since S3 objects have key names and folders do not really exist.  I was trying to create a yum repository in a bucket that is hosting a website and wanted my files to be in the root of the bucket.  No can do, pal.
  4. If you want to sync some files to said website bucket and are making the site publicly readable by setting ACLs with the CLI, as I was, you'll need to make sure that the policy attached to the role CodeBuild runs under grants the proper permissions on your S3 bucket.  In this case you'll need s3:ListBucket, s3:PutObject, s3:GetObject, and s3:PutObjectAcl.   Here's what the policy might look like:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Resource": [
                "arn:aws:logs:us-east-1:*********:log-group:/aws/codebuild/bedrock-build",
                "arn:aws:logs:us-east-1:*********:log-group:/aws/codebuild/bedrock-build:*"
            ],
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ]
        },
        {
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::codepipeline-us-east-1-*"
            ],
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:GetObjectVersion"
            ]
        },
        {
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::openbedrock",
                "arn:aws:s3:::openbedrock/*",
                "arn:aws:s3:::repo.openbedrock.net",
                "arn:aws:s3:::repo.openbedrock.net/*"
            ],
            "Action": [
                "s3:Put*",
                "s3:Get*",
                "s3:List*"
            ]
        }
    ]
}
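To tie tips 1, 2, and 4 together, here's a minimal buildspec.yml sketch of the shape CodeBuild expects.  The commands and paths are placeholders standing in for my Bedrock build, not the real thing; adjust for your own project.  Note the underscore in pre_build:

```yaml
version: 0.1

phases:
  pre_build:          # note the underscore - "pre-build" is silently ignored
    commands:
      - ./configure
  build:
    commands:
      - make && make dist
  post_build:
    commands:
      # --acl public-read is why the role needs s3:PutObjectAcl
      - aws s3 sync repo/ s3://repo.openbedrock.net/repo --acl public-read
artifacts:
  files:
    - repo/**/*
```

Per tip 2, misspell any of those phase names and CodeBuild just skips the phase without a peep.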

More about my adventures with CodeBuild and CodePipeline later...

Friday, December 30, 2016

Follow-up: Using AWS Simple Email Service (SES) for Inbound Mail


As I alluded to in my first post on this subject, it's possible to use AWS Lambda to process mail you receive as well.   In this final post on AWS Simple Email Service, I'll show you how to forward your email using a Lambda function.
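As a preview of the approach, here's a sketch of the header-rewriting step such a forwarder needs.  The addresses and function name are hypothetical, not from my actual Lambda; the point is that SES will only send from a verified address, so the original sender has to move into Reply-To:

```python
# Sketch of the header-rewriting step of an SES-to-Lambda mail forwarder.
# The addresses below are hypothetical placeholders.
from email import message_from_bytes
from email.message import Message

FORWARD_TO = "me@example.com"                 # where mail ultimately lands
VERIFIED_SENDER = "inbound@openbedrock.net"   # an SES-verified address

def rewrite_for_forwarding(raw: bytes) -> Message:
    """Rewrite headers so SES will accept the message for re-sending.

    SES requires the From address to be verified, so we send from our
    own address and preserve the original sender in Reply-To.
    """
    msg = message_from_bytes(raw)
    original_from = msg.get("From", "")
    # Drop headers SES sets itself or that would be rejected on re-send.
    for header in ("From", "Return-Path", "Sender", "Reply-To",
                   "To", "DKIM-Signature"):
        del msg[header]
    msg["From"] = VERIFIED_SENDER
    msg["Reply-To"] = original_from
    msg["To"] = FORWARD_TO
    return msg
```

In the full Lambda function, you'd fetch the raw message from wherever your SES receipt rule stashed it (an S3 bucket, in my setup) and hand the rewritten bytes to SES's send-raw-email API; I'll walk through my version in the post.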

Thursday, December 29, 2016

Part 2: Using AWS Simple Email Service (SES) for Inbound Mail


Delete, delete, delete, delete, forward...

In part one of my two-part blog on Amazon's Simple Email Service, we set up the necessary resources to receive and process inbound email.  In part two, we'll create a worker that reads an SQS queue and forwards the mail to another email address.

Sunday, December 18, 2016

Using AWS Simple Email Service (SES) for Inbound Mail

Oy! Good thing no one else can see THIS message!
We all know what a pain in the rump it is to set up, manage, and secure an inbound mail server.  It's a thankless job, and mail servers are increasingly a point of attack for the bad guys.  It's also possible that if you screw it up you might find yourself in front of Congress!

In our architectures, now more than ever, it is important to reduce the surface area for attacks. That means closing down as many access points to your network as possible.  SMTP running on port 25 is a gaping hole that most architects interested in securing their networks want turned off, like yesterday!

If you don't want to completely outsource your inbound mail to a managed service, AWS SES's inbound email feature is one way to have your cake and eat it too.   It's especially useful if you want your application to receive mail but you don't necessarily want or need to host an email service that includes an IMAP or POP server.  If you only need to receive mail, AWS SES is the perfect solution.  Along with a scalable managed service, SES also includes spam filtering capabilities.

In this two part blog, we'll explore setting up a simple inbound mail handler for openbedrock.net using Amazon Web Services Simple Email Service (SES).

Saturday, December 17, 2016

Using AWS S3 as a Yum Repository

In an earlier post I described how you can use an S3 bucket to host a yum repository.  In this post we'll give the repository a friendlier name and create an index page that's a tad more helpful.  If you're not using AWS S3 and CloudFront to host your static assets, you might want to consider this simple-to-use solution for running a website without having to manage a webserver.

Visit http://repo.openbedrock.net to see the final product.

Monday, December 5, 2016

AWS re:Invent - Wrap Up

It's Friday, I'm blogging at 33,000 feet, and the captain has just turned off the seat belt signs, informed us that it's 50 degrees back at our final destination, Philadelphia, Pennsylvania, and directed us to just sit back and relax.  I get very nervous when the captain of a plane flying at 33,000 feet tells me my final destination is Philadelphia.  I'm hoping to do a few more trips to AWS re:Invent before my "final destination" arrives.

CloudBlogging (pun intended) about my 5 days at one of the most important cloud computing events of the year, thanks to my Acer Chromebook and my free GoGo Inflight passes.  I still have 6 of the 12 passes I received when I purchased the C720 nearly three years ago, prior to attending my second AWS re:Invent extravaganza.  Wifi at 33,000 feet is spotty, but it is possible to complete a blog post on a 5-hour trip across the country.

No blog here would be complete without a shameless plug (I promise this is the only plug in the blog) for Chromebooks - #LoveMeSomeChromebook.

If you haven't read the rest of this somewhat tantalizing series on AWS re:Invent, start here.  I'll wrap up the blog series with a recap of the week's highlights, opine a bit, and provide you with some of the key takeaways.  Enjoy!