Beginnings in Golang and AWS – Part VI – Events, Lambda and Transcribe (cont’d)


In today’s post, we’re going to be looking at the code within the handler’s function. As part of this, we’ll be covering using structs and JSON together, logging to CloudWatch, marshaling, processing of S3 event data, and how to start a Transcribe job. It’s a bit of a longer post today as we’ll go through the entire code in the function.

The Story So Far…

At this point, our handler has been triggered by a file being placed in our S3 bucket to which there is an event subscription for CreateObject (more about this in the next blog). We’ve received the event information, which is placed in our variable S3Event, a struct. We have the information we need for further processing of S3Event and can proceed immediately with it. However, it’s worthwhile spending a couple of minutes looking at how Go processes the information received to place it into the variable.

A Bit About JSON & Go

Go does not have a native parsing mechanism for JSON data that allows dynamic generation of a struct based on the content (think PowerShell’s ConvertFrom-JSON cmdlet, for example). Instead (unless you want to go into the murky world of reflection and maps), you are expected to have some degree of awareness of the schema of the data being received. Go still handles the conversion process, but it looks to you for information on how to map the content. This is done in struct definitions simply by tagging each field with the JSON element it maps to, in the format json:"name".

In our situation, the definition of an S3Event, for example (which holds an array of type S3EventRecord), maps to the “Records” section of the event data (see below). When information is nested (i.e. a subsection of another), we just make sure our struct matches this. An example of this is the EventVersion string, contained in the S3EventRecord struct, which is mapped to “eventVersion” in the JSON data.

An extract of the struct configuration and JSON data is below. You can examine the complete definition of an S3Event from the aws-lambda-go SDK, here:

Here’s the first two levels of our struct…

…and an extract from the S3Data JSON

Notice how they match up. The parser uses this information to populate the struct properties.

One aspect that is quite nice about the mapping process is that provided the top-level outline structure matches the data being received, the entire substructure does not need to be present. You have complete control over which properties should be mandatory.

Naturally, it makes sense to have your struct defined to represent the “full” content schema of the JSON data, but content being received into this struct does not need to be as complete. This can help when JSON content contains fewer fields, yet still comes from the same event source.

If you’d like to spend some more time looking at how a Go struct can be designed from a JSON source, I’d recommend taking a look at:

Code Description

Let’s move onto the code itself now. Here’s our entire function as a reminder, before we burrow down into what it’s doing.

As mentioned previously, we’re going to use CloudWatch for logging, via the aptly named log package. The first thing we’ll log is the S3Event data.

Initially, we need to marshal the data. Marshaling takes an interface (our struct in this case) and returns a JSON encoding of it.

With this done, we use a string conversion, which turns our byte slice into a string.

We then log this information to CloudWatch. A nice touch of CloudWatch is that it picks up that the string data is JSON and formats it nicely for us. You’ll see this firsthand in the final part of this series.

We then need to iterate through each record entry in the event data. We use for to do this, assigning the record variable on each iteration from s3Event.Records. We do not need the index value that range also returns, hence the blank identifier “_”.

Inside the loop, we set s3 to the value of the s3 branch, and from this we log the key referred to in the event. We will use this later as a parameter for our Transcribe job.

Any time we want to perform operations with another AWS service, a session needs to be created. We define the parameters that the session will use (a region of eu-west-1, and my own development profile), then create the Transcribe client from the session using transcribeservice.New, a function from that package.

Next, a check is made to ensure that a client was successfully established. This is easily verified by confirming that a non-nil value was returned to our variable, transcriber. If a nil value was returned, we exit the function. Either way, we log the result to CloudWatch.

Now we want to get our parameters set before starting a transcription job.

  • A random job name needs to be created, so the GUID function we created earlier is used to populate the variable jobname.
  • We set mediafileuri, using string expansion with the bucket name and key name that we got from the S3EventData.
  • mediaformat is set to mp4.
  • Lastly, we set a language code of en-US for the languagecode variable.


We define StrucMedia, which is of type transcribeservice.Media. One thing to mention here is that we pass in a pointer to mediafileuri, not a string. This is because the MediaFileUri definition in the transcribeservice.Media struct specifies, via *string, that it expects to receive a pointer.

As such, our definition is as below.


Then, we invoke the StartTranscriptionJob function. This takes as its parameter a pointer to a StartTranscriptionJobInput struct, whose properties we set within it.  Lastly, a completion message is logged to CloudWatch.


In this post, we’ve covered the code within our lambda function and in doing so have covered how structs and json interoperate, logging to CloudWatch, marshaling, processing of S3 event data and finally how to create a Transcribe job.

We’re nearly there. In the next blog, we’ll run through the entire process of getting our code in S3, creating the lambda function, creating the event subscription, and triggering our function.

Thanks for reading! Feedback always welcome. 🙂




Beginnings in Golang and AWS – Part V – Events, Lambda and Transcribe (cont’d)


In today’s post, we’ll cover the event handler that our Lambda function is going to use when it receives notification, via a subscribed S3 event, of an MP4 file being dropped in our S3 bucket. This will in turn cover the Context and Event objects. Lastly, we’ll look at the one specific to our function, S3Event.

Our Code

Because we’re only covering the handler itself, along with background information on handlers and events, the code within the function has been removed for this post.


Lambda Function Handlers for Go

When building a Lambda function handler in Go, you have a degree of flexibility with regard to the input and output parameters you use, provided they conform, per the latest documentation, to the following rules.

  • The handler may take between 0 and 2 arguments. If there are two arguments, the first argument must implement context.Context.
  • The handler may return between 0 and 2 values. If there is a single return value, it must implement error. If there are two return values, the second value must implement error.

Although not strictly required for our function, Handler, we are using two parameters. The first, per the rules above, is the implementation of context.Context. The second is the actual event data.

Context Object

The service which calls your Lambda function carries metadata, which the developer may find useful to view or use. This is where the Context object comes into play. When your function signature contains a parameter of this type, the metadata is passed into it. There’s a plethora of information that can be available, some of it service-specific and some standard. An example of the latter is the AwsRequestID, a unique identifier that can be used as a reference later should AWS support be required. The complete documentation for the Context object is available here:

Event Data

This is the core information passed from the service to the function. Its format is entirely determined by that service. In order to manage this, the aws-lambda-go SDK provides struct definitions for most event sources. In our case, this is events.S3Event.

If you wish to look at its construction in more detail, you can find it in the s3.go file, located within the events directory of the aws-lambda-go package.

We’ll be setting up an event subscription so that once an MP4 file is dropped into our S3 bucket, it invokes the Lambda function. What does the typical S3 event data our function would be passed look like? Look below.

Here’s the type of information we could expect to see once we have our Lambda function fully in place and an event subscription created to our S3 bucket. More on the latter later.

There is a lot of information there, but the key part of information passed that we’ll be using is contained within the object section.

In this post, we’ve covered the basics of a Lambda event handler for Go, the valid signatures that can be used with it and their purpose. We’ve also looked at the typical information that we can expect to be passed into our S3 event.

In the next blog, we’ll dig deeper into the function and the code within.

Thanks for reading! Feedback always welcome. 🙂




Beginnings in Golang and AWS – Part IV- Events, Lambda and Transcribe


The previous posts have taken us through the process of creating a Go executable for uploading a file to S3. We’ll now focus on the next stage of our project. Namely, creating a Transcribe job automatically when an mp4 file is dropped into an S3 bucket.

During these posts, we’ll be covering our code, S3 Events, Lambda, CloudWatch and Transcribe. These areas will include, amongst others, the CreateObject event, subscriptions, handlers, marshalling, creating a reusable package, logging, the reference date format, string slices, and a bit of a deeper look into structs.


Let’s recap our target by the end of this group of blogs. We want to setup a configuration that responds to an mp4 file being placed into an S3 bucket and runs code that will take the information, including the key, and from this create a job in Amazon Transcribe. Because our code will be running remotely, we also want to have some way to log information during execution, such as an action being undertaken or an error if one has occurred.

Our Code

As before, let’s start with our code, and then break it down.


We’re using several other packages in this code, some of which we’ve already used.

  • context
    • We will be using the context package, and particularly the Context type, as part of our Lambda function. This allows our Lambda function to obtain metadata from AWS Lambda. Although not strictly required, it’s interesting to cover the type of information available.
  • json
    • implements encoding and decoding of JSON.
  • fmt
    • input and output functions, such as Printf
  • log
    • we use log to provide formatted output which will be picked up by CloudWatch
  • strconv
    • is used in this project to allow us to perform some formatting on time and date information
  • time
    • for displaying and measuring date and time information
  • events
    • this package is split into separate Go files, representing the various AWS services which support events
  • lambda
    • functions, primarily for dealing with Lambda handlers
  • aws
    • the generic AWS package
  • session
    • used for creating session clients and storing configuration information about the session
  • transcribeservice
    • this package is used for our operations involving the Transcribe service

GUID function

The purpose of this function is to generate a unique identifier that can be used as the name of our Transcribe job. I chose an arbitrary format for this.

The function introduces us for the first time to the time package and two of its functions, Parse and Since.
From an operational point of view, Parse is used to decode a string and cast it into a time object. Since provides information on the period of time that has elapsed since a given date/time. These on their own are fairly straightforward to understand. Then we go onto reference date/time format…

Reference Date/Time Format

One area where Go differs from any other language I’ve worked with to date is on how it deals with parsing and formatting dates and times. Instead of using classic identifiers (such as hh, mmm, ss), it uses an actual reference based format to indicate how it should be interpreted. Confused? I was!
If we look at the code for the time package’s format.go file, we can see a set of constants that are used to define these reference points. The comments on the right-hand side are the actual values associated with them.

Let’s say we have a string 01-01-1970, aka 1 January 1970. We want Go to take this string and convert it to a Time object. The interpreter needs to know what represents what, though.
Looking at the list above:

01 (our day) uses as its indicator 02
01 (our month) uses as its indicator 01
1970 (our year) uses as its indicator 2006

So our parsing string (including the dashes) for 01-01-1970 is 02-01-2006

Back to the remainder of our GUID function code :-

The time.Parse function takes as input the layout format and the string to be parsed. Now when we look at this code again, it starts to make sense:

Then, we use the ad variable as a parameter in the function time.Since, assigning strsince the value of the number of nanoseconds elapsed since that moment.

When converting the result to a string, we specify that the number should be represented as base 10 (aka decimal)

String Slices

Now we’re going to format the results of strsince into a “Windowsesque” GUID format. To do this we’re going to be using substrings with additional formatting characters.

Here’s what’s happening:

  • The value of strsince will be a 19-digit number. In my code I wanted to make it four blocks of five characters (i.e. 20 characters in total)
  • For the above, a zero is added onto the beginning of the string.
  • We now get into how Go deals with creating a string slice (aka substring). Go is different from the archetypal formats you might have seen for creating a substring.
  • There is no direct substring function; we refer to the string within square brackets, like the array format.
    • BUT instead of an inclusive [startindex:lastindex] format (with 0 being the first item), Go uses a half-open [startindex:endindex] format, where the character at the end index is excluded.
  • For example, given the string 0123456789, the expression [5:8]:

Does not give us a substring of 5678

This produces the string 567

Index 5 is the number 5
Index 8 is the number 8, which is excluded, so the last character included is 7

When we use the concatenation above, it will result in our forthcoming Transcribe jobs having a name of the following type:-

In this post, we’ve covered the various packages that we’ll be using, the reference date/time format, string slices, and string formatting.

In the next blog, we’re going to kick into S3 events and Lambda.

Thanks for reading! Feedback always welcome. 🙂




Beginnings in Golang and AWS – Part III – Uploading to S3 (cont’d)


In the previous post, we covered areas in Go such as pointers, packages, and variables. We also closed off with using the flag package for parsing command line parameters.

Reminder: You can find the repo for this entire project at

The specific code for the Upload package is located within src/upload

Today’s post will begin to use AWS-specific commands, and in doing so, introduce further areas such as returned values, blank identifiers, obtaining the value from a pointer location, nil, and conditional statements. By the end of this, we’ll be able to compile and run the code, achieving our goal of being able to upload a file to an S3 bucket.

Create a New Upload Session

Now we’re getting into the AWS side of things. Here, we create a new session, storing it in the sess variable. We then use this variable to create a new uploader object.

There are quite a few things going on in this part, despite it being only two commands.

  • You’ll probably have already noticed the := operator, mentioned in the previous section of code. What’s different this time though is that there is a comma and _ character on the left hand side as well.
  • In Go, the output from a function is carried out via the return command. Unlike some other languages, in Go if you wish to return more than one value, it does not need to be ‘packaged’ up into an object you later have to parse. Instead, you define one or more names (solely for use within the function), with types to be returned in your function header. At the exit point of the function, you simply use the return statement along with the variables being returned that match up with the declaration. A comma is used to separate these. e.g. return x, y
  • In some circumstances you may not be interested in a specific return value from a function. In Go we can use _, known as the blank identifier, when the program logic requires a value to be returned but we do not want to use it.
  • With reference to the above code, a quick look at the documentation for the session.NewSessionWithOptions function tells us it returns both a session object and an error object. So in the code above, we are simply receiving, but discarding, the error details returned.

Now we define uploader, which gives us access to the functions for uploading to S3.

Validate the File Exists

We want to make sure that the filename being referred to actually exists before attempting any upload. If the file does not exist, then we want to display the error message, and then exit the program. We use the os.Open function to test this.


  • We now use both variables returned by os.Open
  • What does *filename mean? Well, remember that when we assigned this variable, it was a pointer that was returned, not a value. If we were to pass it in as-is, all we would be passing is a memory address. To tell Go to pass in the value at the memory address, we prefix the variable with a *
  • Next, we check what err is set to. We do this via the if err != nil condition
  • The equivalent of is not equal to in Go is !=
  • An uninitialized value in Go is referred to as nil, mostly akin to null in other programming languages.

Thus, our condition could read “if the value of err is not uninitialized”

The actions to be undertaken if the condition above is true are carried out within the {….} block

  • Use fmt.Println to output to the console err.Error(), which contains the error text
  • Exit the program using os.Exit, returning the error code of 1 back.

Upload the File

The final part is to carry out the upload of the object to an S3 bucket, check if the task has completed successfully.

  • First, we define the value of key. Remember that in S3, there is no such thing as either a file, or a directory. However, we are able to define a key, which will be used for referencing it. On this occasion, we simply set the value of key to the name of the file.
  • On this occasion, we’re only interested in whether an error occurred, as opposed to the other output of the function.
  • We use the uploader.Upload function, supplying it with a pointer to a memory location holding a value of the type UploadInput, which is a struct.
  • A struct is quite simply a collection of names and values, loosely akin to what we sometimes call hashtables in other languages.
  • In our case, we are submitting values in the struct for Bucket, Key, and Body.
  • What does & mean? In Go, prefixing a variable with an ampersand gives us its memory location, as opposed to its value. The Upload function expects a pointer as the parameter.
  • Finally, we check err in exactly the same manner as previous in the code, outputting the error, if one occurs.

Compiling the code

That’s us finished our first program for AWS in Go! The next step is to compile the program itself.

Start a terminal session and change your current directory to the one containing the .go file.
To carry out the compile action, generating the executable, enter the following:

  • Building a Go package requires compiling the .go files in a directory structure. We use the go build command for that.
  • By default, go build uses an output name that is the same as the .go file without the suffix.
  • This default can be overridden using -o xxxx, where xxxx is the name of the file you wish to be generated.

You should see output similar to that of below:

Checking the Help Text

Forgotten how we use the command? If we want to get the help text for the package we’ve just compiled, we can just use:

Giving us the following:

Seem familiar to some code from a blog or two ago?

Running the Code

Let’s run our executable now, using a file I’ve got on my desktop.

Validating Upload

Finally, let’s double check that the file has indeed successfully uploaded.


In this post we’ve seen how values are returned from functions and how we can use them, the use of blank identifiers to ignore information we don’t need returned, obtaining values from a pointer location, the use of nil, conditional statements, and how to compile a package. Lastly, we found out how to get help on a compiled package, and run it with parameters.

This is the first part of our three stage project out of the way. In the next part, and similar to the PowerShell blog post, we’re going to be developing code which will create a Transcribe job, using a media file we’ve uploaded to S3.

However, we’re going to make it much more funky and automagic. So in addition to Transcribe, we’ll be using S3 events and Lambda. By the end of it, we’ll have a system in place that just requires us to drop an mp4 file into a bucket and, through the wonders of Lambda, a Transcribe job will be automatically created for us.

Thanks for reading!


Beginnings in Golang and AWS – Part II – Uploading to S3


With the preambling and prerequisites of Part I out of the way, we can now begin writing some code to allow us to upload the MP4 file to an S3 bucket.

In this post, we’ll cover the format of a Go package, how to add packages to an installation of Go, the import statement, and lastly how we go about parsing command line options.

It might not seem a great deal of code, but there’s quite a lot of concepts covered here that are essential to understanding how Go works. We’ll then be primed for the final part of the series on S3, which will cover the rest of the code, compiling, running, and using this program.

Uploading a File to S3

I’ll show the complete code first, and then break it down into parts.

Code Breakdown

Let’s go through the code and get a feel for what’s happening here.

VERY important! Go is case sensitive. Capitalization is treated differently, both from a name-interpretation and an operational point of view.

The Package Declaration

Every Go program consists of one or more packages. For a program to run (as opposed to being a resource for another program), it requires a main package.

Define Packages to be Used

Multiple packages exist for Go, both as part of a default installation, and also from the community.

The import statement tells Go what packages (and consequently resources such as functions and types) are available to the program. We can either do separate import statements, or group them together like above.

Go has a default package directory setting for packages not included in the default installation, from which it attempts to find the package (typically ~/go/src).

For example, one of the packages referred to in the import statement above is located at the following location under my home directory:

When you want to use a resource in a package, such as a function or type, you need to refer to it including the package name. So if we wanted to use the Printf function within the fmt package to write a message, an example of this would be:

fmt.Printf("No Hello World today")

Define the Main Function

The entry point for a file to be executable (as opposed to solely a resource) is the main() function. The code executed within the function, represented by the dots, is enclosed within { and } braces.

Configure Command Line Parameter Parsing

When we execute this program from the command line, we want to include parameters which will define both the S3 bucket we want to upload to and the source file. The values need to be parsed and assigned to variables. To make it easier, we also want to provide some help text for people running the program.

Several things happen with the above code, so let’s go through them.

  • Both the bucket and filename variables are defined. Go normally requires a variable and its type to be pre-declared before it can be used. However, it is possible to create a variable and assign a value to it by using := Quite simply, this leaves it to the right-hand side of the operator to provide the type and value information. In this case, that is the result of the String function in the flag package.
  • We use the flag package. The flag package has functions that allow us to parse the command line. We use flag.String to define a string flag with a specified name, default value, and usage string. The return value is a reference to the memory address (aka pointer) which stores the value of the flag.
  • The Parse function is called. This carries out the processing of the command line parameters, setting the values at the memory locations referred to by bucket and filename
  • It’s worthwhile mentioning that the output that will be generated if help is requested on our program, once compiled, is defined in this code as well. We’ll see in the last part on the S3 Uploader just exactly how this works.
  • You also might be wondering why the function name is capitalized. This is because in order for a resource in a package to be used by another, the initial letter must be a capital one. This marks it as “exportable”, allowing its use elsewhere.


In this post, we’ve covered a lot of topics, such as how we can use existing packages with our Go program, how packages are stored locally, the effect that using lower- and uppercase letters can have, the requirements for a program in Go, and the import statement. We also began to delve into assignment by inference, pointers, flags, and how we can parse them.

With these out of the way, we’re primed and pretty much tickety-boo for the final part of the series on S3, which will cover the rest of the code, introducing further concepts and syntax, including compiling, running, and using this program.

Thanks for reading!


Beginnings in Golang and AWS – Part I

Part I – Introduction, Go, and Prerequisites

This series of blogs describes my journey learning Golang so far, with particular reference to AWS services. In it, I’ll be covering how I was able to achieve a task with AWS services using Go, but also what I learnt about Go at the same time. In addition to some services already mentioned in previous blogs, we’ll also be covering Lex, API Gateway, Lambda, and Alexa Skills.


DSLs aside, it’s been quite some time since I’ve endeavored to learn a new language in IT. Aside from a smattering of parsing through some Ruby and C# code on occasion, and a bit of Python with BOTO3 for AWS stuff, it’d be fair to say the last one I learned to any level of depth was PowerShell.


Already providing support for Node.js, Java, C# and Python with Lambda, AWS announced in January this year the addition of Go (also commonly referred to as Golang). I’ve seen a lot of enthusiasm around the communities about Go, and with it marked as cross-platform, fast, and of a relatively simple nature (sorta), I decided I’d give this a bash (no pun intended).

Whether it’s because I’m over-enthusiastic or (more likely) of a completely disorganized mind, my usual modus operandi involves skipping “hello world” learning completely and just diving in. Most certainly not a purist compliant approach…
I figured a good way to try this would be to adapt some of the PowerShell scripts I’ve written previously that feature here. Unoriginal for sure, but as it’s fresh in my mind, why not try with Golang to perform the creation of an SRT file?

Previously discussed in the PowerShell blogs, the task effectively comprises uploading a media file to an S3 bucket, creating a Transcribe job, obtaining the results, and then converting them.

As we’ll see later in this series of blogs though, additional options are available for how we carry some of these tasks out, making further automation and input sources possible.

For now though, we’ll get the pre-requisites out of the way.


  • An existing AWS account. You can sign up for a Free Tier account here
  • Credentials and configuration are set up. See here for information on how to do this.
  • Go is installed (see below)
  • AWS SDK for Go is installed (see below)
  • You have cloned the repo for the project from either its source or your own fork
  • You’ve a ready-to-hand MP4 media file
  • You have a suitable development environment, such as VS Code

Install Go

Naturally, you are going to need Go installed. You can find your appropriate binary at the downloads page of the Go website. I’ll leave it to you to follow the specific installation instructions and any steps needed to fulfill requirements.

Install the AWS SDK for Go

You’ll also need the AWS SDK. Go has a really nice way to add packages to your environment, achieved simply by running go get xxxxx from the command line, where xxxxx is the location of the package. Go handles the rest.

For installation of the AWS SDK for Go, simply use the following:

go get

Install VS Code (optional)

I currently use VS Code, which has some nice extensions to do tasks such as formatting, and automatic adding and removal of import commands. If you’re not currently using a development environment, or just simply fancy trying it out, you can obtain the binaries (available for Windows, OSX, and Linux) from here:

Add the Go Extension to VS Code

Follow the instructions and, after installation is complete, launch VS Code. Add the extension ms-vscode.go, which provides features such as those mentioned above within the development environment.


With the prerequisites now in place for us to use Go with AWS services, we can begin the process of putting together code which will help us achieve the tasks we’ve already mentioned.

In the next blog, we’ll dive into writing a script that will upload our media file to S3 using the SDK, learning a bit about Go at the same time.

Thanks for reading!


When Marvin Gaye Met Amazon Transcribe & PowerShell – Automating Subtitle Creation – Part III

“I Heard it Through the Grape Vernon”

Part two of this series saw us put the code in place to allow us to upload the media file, create the transcription job, and download the results. With this complete, it’s time to move on to the next, and final, stage. That is, processing the JSON file, and creating an SRT file from it.

NB. You can find the code used in this blog, and additional documentation, at my Github project, aws-powershell-transcribe2srt:

Initially, we’ll read the contents of the JSON file into a variable, specifically using the items section, which contains the word-by-word breakdown. At the same time, some other variables are set with defaults.

Next, the Transcription variable needs to be processed. There are several things that need to be taken into account:

  • The obvious one is that we need to parse through each item in the object, requiring a loop.
  • We also want to ensure that the number of words displayed per line does not exceed recommendations
  • In relation to the above, we also need to set the end time for each sequence to that of its last word.
  • Lastly, punctuation needs to be taken into account.

Outer Loop – Beginning

At the start of each pass of the outer while loop, the variable $strlen is set to zero. This variable will contain a count of the number of characters in the current sequence being processed.

Process the Start Time Attribute

Next, start_time, the time at which the word was said, is read. In its original JSON format, this appears as below.

However, the format for an SRT file of a time element consists of the number of hours, minutes, seconds, and milliseconds. All but the last of these use a fixed two character zero-padded digit format. The last uses three digits.

We’ll convert this string into the required style by first converting it to a timespan object, and then using string formatting to set it as required. Variables are also set for the subtitle text, the sequence number, and a flag to indicate that we are beginning the first line of this subtitle sequence. A variable is set for the end time too, and the process continues until 64 characters have been exceeded.
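As a hedged example of just this conversion step (the actual script’s variable names may differ), PowerShell’s timespan type makes the formatting straightforward:

```powershell
# Convert Transcribe's seconds-based "12.34" into SRT's "00:00:12,340".
$start_time = '12.34'                               # as read from the JSON item
$ts         = [timespan]::FromSeconds([double]$start_time)
$srtStart   = $ts.ToString('hh\:mm\:ss\,fff')       # zero-padded, comma before ms
# $srtStart is now "00:00:12,340"
```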

Inner Loop

An inner loop is also required, since we want the subtitles to be refreshed after two lines and with a maximum of 64 characters. Whether the item is a pronunciation (aka word) or punctuation needs to be taken into account, as well as setting the end time marker for the sequence when two lines have been occupied.

The item’s type (pronunciation or punctuation) and its content are read, and the running string-length tally is increased accordingly. Based on the type of item, the subtitle string is appended to. When the length of the string exceeds 32 characters and we are still on the first row, a newline character is added, and the flag is set to indicate that subsequent content will go on the second line.
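A rough sketch of that inner loop, using hypothetical variable names (assuming the items are in $transcription and $i is the current index), might be:

```powershell
# Consume items until 64 characters have been used across at most two lines.
$strlen = 0; $subtitle = ''; $firstline = $true
while (($strlen -le 64) -and ($i -lt $transcription.Count)) {
    $item = $transcription[$i]
    if ($item.type -eq 'pronunciation') {
        $subtitle += ' ' + $item.alternatives[0].content   # words get a leading space
        $endtime   = $item.end_time                        # sequence ends with its last word
    }
    else {
        $subtitle += $item.alternatives[0].content         # punctuation attaches directly
    }
    $strlen = $subtitle.Length
    if (($strlen -gt 32) -and $firstline) {
        $subtitle += "`n"          # start the second line once past 32 characters
        $firstline = $false
    }
    $i++
}
```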

Outer Loop – End

When the inner loop is complete, it signifies a new sequence is ready. This part simply creates the appropriate representation of the sequence as a string, and appends it to the variable holding the entire contents of what will become the SRT file.
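Assuming hypothetical variables $srtStart and $srtEnd hold the formatted start and end times, and $subtitle the text, that append might look like:

```powershell
# Build this sequence in SRT form and append it to the running output.
$srtinfo += "$seqno`n$srtStart --> $srtEnd`n$($subtitle.Trim())`n`n"
$seqno++
```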

Writing the SRT File

Lastly, the contents of the $srtinfo variable are written to file.
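A minimal sketch of that final step (the output filename is a placeholder):

```powershell
# Write the accumulated SRT content to disk.
$srtinfo | Out-File -FilePath '.\grapevine.srt' -Encoding utf8
```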

Viewing the Results

At this point, it really depends on what you want to do with the SRT file and its accompanying media file. Media players like VLC allow you to manually add an SRT file to a playing video, and most current televisions with USB playback will happily display the subtitles provided the filenames (without extension) match.
If you really want to go full (non-pron) hardcore, you could add the SRT data as a stream directly into the media file using a tool like FFmpeg, which allows you to multiplex it in, or even to “burn” the subtitles onto the video. Using this method, vloggers really wanting to reach their audience could make subtitles in multiple languages available within the video file itself.


The combination of the AWS services S3 and Transcribe, coupled with PowerShell and the AWS module for it, makes it a relatively straightforward process to obtain a transcription of a media file, which can then be converted to SRT format for later use.

Also, as a final word, bear in mind that as a service meant for transcribing relatively quiet environments with spoken (not sung) words, sometimes the results for a media file can be a little bit…misunderstood. Often with humorous results… 🙂

Marvin’s on his own

Let’s help him with some words

That’s Marvin SRT’d

Wrong words, but let’s karaoke anyway

Thanks for reading, and feedback always welcome.

Coming soon…a slight departure from the norm with an advanced version of this but using Golang.

Think I need to see about getting me another domain name…


When Marvin Gaye Met Amazon Transcribe & PowerShell – Automating Subtitle Creation – Part II

“I Heard it Through the Grape Van”

With the background set in the previous post for what we’ll be aiming to achieve, it’s time to move forward with getting things into gear.
Today’s post covers how to upload the media file to S3, create the Transcribe job to process it, and finally download the results locally.

Quick Recap

This project demonstrates the use of the AWS Transcribe service and PowerShell to create an SRT (subtitle) file from a media file.

Our project makes use of:

  • PowerShell Core
  • AWS PowerShell Core Cmdlets
  • AWS S3
  • AWS Transcribe Service
  • An MP4 video file


Before going into the nitty gritty, you need to ensure all of the following are in place:

  • An existing AWS account. You can sign up for a Free Tier account here
  • Credentials and configuration are set up. See here for information on how to do this.
  • PowerShell Core is installed (available cross platform here)
  • The AWS PowerShell Net Core module is installed.
  • You’ve a ready-to-hand MP4 media file
  • You have cloned the repo for the project from either its source or your own fork

Sequence of Events to Transcribe the File

The order of events that need to happen is relatively straightforward:

  • Upload the file to S3
  • Create a Transcribe job
  • Wait for job to finish
  • Download the JSON file

Upload the file to S3

We’ll start out defining some variables and defaults to make things a bit easier, then the Write-S3Object cmdlet takes care of itself:
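A minimal sketch of the upload, with placeholder bucket and file names:

```powershell
# Placeholders throughout; substitute your own bucket, key, and file path.
$bucket    = 'my-transcribe-bucket'
$mediafile = '.\grapevine.mp4'
Write-S3Object -BucketName $bucket -File $mediafile -Key 'grapevine.mp4'
$s3uri = "https://$bucket.s3.amazonaws.com/grapevine.mp4"   # location Transcribe will read from
```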

Create a Transcribe job

All Transcribe jobs have an associated job name. For this script, I’ve used the GUID class to create a unique one. We define this, along with the name of the results file that will be used when it’s downloaded from a completed job. Then the Start-TRSTranscriptionJob cmdlet is used to initiate the task. The $s3uri variable is used to tell Transcribe where to get the file it is to process.
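Sketched out (the language code and local filename are assumptions for illustration, and $s3uri is assumed to hold the S3 location of the media file), job creation looks along these lines:

```powershell
# Create a uniquely named job pointing Transcribe at the uploaded file.
$jobname  = (New-Guid).Guid            # unique job name
$jsonfile = '.\transcription.json'     # local name for the downloaded results
Start-TRSTranscriptionJob -TranscriptionJobName $jobname `
                          -LanguageCode 'en-US' `
                          -MediaFormat 'mp4' `
                          -Media_MediaFileUri $s3uri
```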

Wait for job to finish

A basic loop is put in place which checks the status of the Transcribe job every five seconds. The loop continues until the job status changes from IN_PROGRESS, indicating either failure or completion.
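A sketch of such a loop (assuming the job name is in $jobname):

```powershell
# Check the job every five seconds until it leaves the IN_PROGRESS state.
do {
    Start-Sleep -Seconds 5
    $job = Get-TRSTranscriptionJob -TranscriptionJobName $jobname
} while ($job.TranscriptionJobStatus -eq 'IN_PROGRESS')
```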

Download the JSON file

When a job has executed successfully, indicated by its COMPLETED status, it stores the result in an S3 bucket of AWS’s own choosing. The location is not in your own personal bucket, and the link has an expiry time. By querying the TranscriptFileUri property of the job status, we can get the location where it is stored. You’ve then got the choice of using the S3 cmdlets to download the file, or alternatively (as in this case) simply using Invoke-WebRequest.
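For example (assuming $job holds the job status and $jsonfile the local target filename):

```powershell
# Download the results from the pre-signed URI Transcribe provides.
if ($job.TranscriptionJobStatus -eq 'COMPLETED') {
    Invoke-WebRequest -Uri $job.Transcript.TranscriptFileUri -OutFile $jsonfile
}
```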

Part III will cover the largest part of the process, converting the job results into the SRT file we’ll use with the original video.
Thanks for reading!


When Marvin Gaye Met Amazon Transcribe & PowerShell – Automating Subtitle Creation – Part I

“I Heard it Through the Grape Van”

It’s been a while since my last blog, so I had to try and think of something a bit more eye-catching than the previous ones for the title. 🙂  That said, the title and heading are actually very accurate… #mysterious

This set of posts covers how one of the AWS services, Transcribe, can be used (in this case in combination with PowerShell) to create a subtitle file for any video, which can then be used for viewing. There’s quite a bit of content, so as mentioned, it’s being split across several posts.

Today’s post provides a background to the main parts that will be used. These are two AWS services (Amazon S3 and Amazon Transcribe), the subtitle file format, AWS Tools for PowerShell Core Edition, and a video of the legend himself, Marvin Gaye.

Amazon Transcribe/S3

Amongst the plethora of services AWS offer is Transcribe, or to be more precise, Amazon Transcribe. Part of AWS’s group of Machine Learning offerings, Transcribe’s role is fairly straightforward. Feed it a supported media file (FLAC, MP3, MP4 or WAV) from a bucket on S3 and it will process the file, endeavoring to provide as best as possible a transcription of it. Upon successful completion of a job, a JSON formatted file becomes available for download.

The file itself contains a summary of the conversion at its beginning:

Which is then followed by a breakdown of the job. This consists either of data about the next word identified (its start and end time, the word itself, a ‘confidence’ rating from the service that it has correctly identified the word, and its classification)…

…or, if it has found an appropriate place, an item of punctuation.
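As an illustration (abbreviated, with made-up values), the overall shape of the results file is along these lines:

```json
{
  "jobName": "example-job",
  "results": {
    "transcripts": [ { "transcript": "I heard it through the grapevine." } ],
    "items": [
      {
        "start_time": "0.04",
        "end_time": "0.56",
        "alternatives": [ { "confidence": "0.9987", "content": "I" } ],
        "type": "pronunciation"
      },
      {
        "alternatives": [ { "confidence": "0.0", "content": "." } ],
        "type": "punctuation"
      }
    ]
  },
  "status": "COMPLETED"
}
```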

Unlike the other supported formats, MP4 can also (and usually does) consist of one or more additional streams besides audio. Typically this will be video content, but it might also include additional streams for other audio (think different languages, or director’s/producer’s commentary, for example) or subtitles.

Subtitle Files

At their core, subtitle files simply contain textual descriptions of the content of their accompanying video file. This is typically dialogue, but also other notifications, such as the type of music being played, or other sounds. Accompanying these are timespan indicators, which are used to match this information up with the video content.

The most common file format in use is the SubRip format, better recognised by its extension of SRT. These files are arranged in a manner similar to below:

Line by line respectively, these consist of:
  • The numeric counter identifying each sequential subtitle
  • Start and end times for the subtitle to be visible, separated by the --> marker
  • The text itself, typically between one and two lines, and ideally restricted to a number of characters per line
  • A blank line indicating the end of this sequence
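For reference, a (made-up) fragment in this layout looks like:

```
1
00:00:01,000 --> 00:00:03,500
I heard it through the grapevine

2
00:00:03,600 --> 00:00:06,200
Not much longer would you be mine
```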

Looking at the two different forms of text data, in Transcribe and SRT format respectively, you’ll probably have already noticed that the former contains enough information to allow, with a bit of transformation, output in the latter’s format.

AWS Tools for PowerShell Core Edition

PowerShell Core is Microsoft’s cross-platform implementation of PowerShell and as such can run on pretty much any platform that has .NET Core installed. AWS provides a module for this platform, AWS Tools for PowerShell Core Edition. Consisting of, at present, 4,136 cmdlets, it covers pretty much the whole broad spectrum of services available from the provider. Amongst these is the set for the Transcribe service, which ironically numbers only three.

Marvin Gaye

Needing no introduction whatsoever, the posts over the next day or so make use of an MP4 file of the legend singing I Heard it Through the Grapevine a cappella. If you really feel the need to follow along exactly, it’s fairly straightforward to find and download. It’s most definitely worth a listen in any case if you’ve not heard it already.

With all the background set, part II will kick things off properly with getting set up for the script and the beginning of its implementation.



Using PowerShell to get data for Microsoft Ignite

I was registered for Ignite yesterday (yipeeee!!), and decided to take a look at the session list. 

Navigation and search is a bit of a chore, so I set out to see if I could get the information I needed via PowerShell. If so, I’d be free to obtain whatever data I wanted quickly.

Here’s what I came up with. After the script, a couple of examples of querying the data are given. Note that instead of querying the web services each time for data, I’ve just downloaded all the data and query it locally. This isn’t really best practice, but (IMO) the small size of the dataset mitigates this to some extent.

Recommendations for improvements or additions are more than welcome. 🙂 It will be posted to GitHub shortly.