# Thursday, December 28, 2017

Generating a thumbnail image from a larger image sounds easy – just shrink the dimensions of the original, right? But it becomes more complicated if the thumbnail image is a different shape than the original. For example, the original image may be rectangular but we need the new image to be a square. Or we may need to generate a portrait-oriented thumbnail from a landscape-oriented original image. In these cases, we will need to crop or distort the original image when we create the thumbnail. Distorting the image tends to look very bad; and when we crop an image, we want to ensure that the primary subject of the image remains in the generated thumbnail. To do this, we need to identify the primary subject of the image. That's easy enough for a human observer to do, but a difficult thing for a computer to do. But if we want to automate this process, we will have to ask the computer to do exactly that.

This is where machine learning can help. By analyzing many images, a machine learning model can figure out what parts of a picture are likely to be the main subject. Once this is known, it becomes a simpler matter to crop the picture in such a way that the main subject remains in the generated thumbnail.

As I discussed in a previous article, Microsoft Cognitive Services includes a set of APIs that allow your applications to take advantage of Machine Learning in order to analyze images, sound, video, and language.

The Cognitive Services Vision API uses Machine Learning so that you don't have to. It exposes a web service to return an intelligent thumbnail image from any picture.

You can see this in action here.

Scroll down to the section titled "Generate a thumbnail" to see the Thumbnail generator as shown in Figure 1.

Figure 1

With this live, in-browser demo, you can either select an image from the gallery and view the generated thumbnails; or provide your own image - either from your local computer or from a public URL. The page uses the Thumbnail API to create thumbnails of 6 different dimensions.

For your own application, you can either call the REST Web Service directly or (for a .NET application) use a custom library. The library simplifies development by abstracting away HTTP calls via strongly-typed objects.

To get started, you will need an Azure account and a Cognitive Services Vision API key.

If you don't have an Azure account, you can get a free one at https://azure.microsoft.com/free/.

Once you have an Azure Account,  follow the instructions in this article to generate a Cognitive Services Computer Vision key.


To use this API, you simply have to make a POST request to the following URL:

    https://[location].api.cognitive.microsoft.com/vision/v1.0/generateThumbnail?width=ww&height=hh&smartCropping=true

where [location] is the Azure location where you created your API key (above) and ww and hh are the desired width and height of the thumbnail to generate.

The “smartCropping” parameter tells the service to determine the main subject of the photo and to try to keep it in the thumbnail while cropping.
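As a sketch, these pieces can be assembled into the request URL with a small C# helper. BuildThumbnailUri is a hypothetical helper introduced here for illustration; westcentralus stands in for your own location:

```csharp
using System;
using System.Web;

public class ThumbnailUrlExample
{
    // Combine location, size, and smartCropping into the Thumbnail API URL.
    public static string BuildThumbnailUri(string location, int width, int height, bool smartCropping)
    {
        var queryString = HttpUtility.ParseQueryString(string.Empty);
        queryString["width"] = width.ToString();
        queryString["height"] = height.ToString();
        queryString["smartCropping"] = smartCropping ? "true" : "false";
        return string.Format(
            "https://{0}.api.cognitive.microsoft.com/vision/v1.0/generateThumbnail?{1}",
            location, queryString);
    }
}
```

Calling BuildThumbnailUri("westcentralus", 300, 300, true) produces the same kind of URL that the sample application later in this article builds by hand.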

The HTTP header of the request should include the following:

Ocp-Apim-Subscription-Key: The Cognitive Services Computer Vision key you generated above.

Content-Type: This tells the service how you will send the image. The options are:

  • application/json    
  • application/octet-stream    
  • multipart/form-data

If the image is accessible via a public URL, set the Content-Type to application/json and send JSON in the body of the HTTP request in the following format:

    {"url": "imageurl"}

where imageurl is a public URL pointing to the image. For example, to generate a thumbnail of a picture of a skier, you would submit JSON in that format containing the picture's public URL.

If you plan to send the image itself to the web service, set the content type to either "application/octet-stream" or "multipart/form-data" and submit the binary image in the body of the HTTP request.
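Here is a minimal sketch of the binary option. BuildBinaryRequest and GenerateThumbnailAsync are hypothetical helpers, the file paths are illustrative, and error handling is omitted:

```csharp
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public class BinaryUploadExample
{
    // Wrap raw image bytes in a POST request with the octet-stream content type.
    public static HttpRequestMessage BuildBinaryRequest(string uri, byte[] imageBytes)
    {
        var request = new HttpRequestMessage(HttpMethod.Post, uri);
        request.Content = new ByteArrayContent(imageBytes);
        request.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
        return request;
    }

    // Send a local image file to the service and save the generated thumbnail.
    public static async Task GenerateThumbnailAsync(string key, string uri, string imagePath, string thumbnailPath)
    {
        var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
        byte[] imageBytes = File.ReadAllBytes(imagePath);
        HttpResponseMessage response = await client.SendAsync(BuildBinaryRequest(uri, imageBytes));
        File.WriteAllBytes(thumbnailPath, await response.Content.ReadAsByteArrayAsync());
    }
}
```

The only differences from the JSON version are the Content-Type header and the raw bytes in the request body; the subscription key header and query string parameters are the same.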

Here is a sample console application that uses the service to generate a thumbnail and save it to a file on disk. You can download the full source code from GitHub.

Note: You will need to create the folder "c:\test" to store the generated thumbnail.


    // TODO: Replace this value with your Computer Vision API Key
    string computerVisionKey = "XXXXXXXXXXXXXXXX";

    var client = new HttpClient();
    var queryString = HttpUtility.ParseQueryString(string.Empty);

    client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", computerVisionKey);

    queryString["width"] = "300";
    queryString["height"] = "300";
    queryString["smartCropping"] = "true";
    var uri = "https://westcentralus.api.cognitive.microsoft.com/vision/v1.0/generateThumbnail?" + queryString;

    HttpResponseMessage response;

    string originalPicture = "http://davidgiard.com/content/Giard/_DGInAppleton.png";
    var jsonBody = "{\"url\": \"" + originalPicture + "\"}";
    byte[] byteData = Encoding.UTF8.GetBytes(jsonBody);

    using (var content = new ByteArrayContent(byteData))
    {
        content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
        response = await client.PostAsync(uri, content);
    }

    if (response.StatusCode == System.Net.HttpStatusCode.OK)
    {
        // Write thumbnail to file
        var responseContent = await response.Content.ReadAsByteArrayAsync();
        string folder = @"c:\test";
        string thumbnailFullPath = string.Format("{0}\\thumbnailResult_{1:yyyyMMddhhmmss}.jpg", folder, DateTime.Now);
        using (BinaryWriter binaryWriter = new BinaryWriter(new FileStream(thumbnailFullPath, FileMode.Create, FileAccess.Write)))
        {
            binaryWriter.Write(responseContent);
        }
        // Tell the user where to find the thumbnail
        Console.WriteLine("Done! Thumbnail is at {0}!", thumbnailFullPath);
    }
    else
    {
        Console.WriteLine("Error occurred. Thumbnail not created");
    }


The result is shown in Figure 2 below.
Figure 2

One thing to note: the Thumbnail API is part of the Computer Vision API. As of this writing, the free version of the Computer Vision API is limited to 5,000 transactions per month. If you want more than that, you will need to upgrade to the Standard version, which charges $1.50 per 1,000 transactions.

But this should be plenty for you to learn this API for free and build and test your applications until you need to put them into production.
The code above can be found on GitHub.

You can find the full documentation – including an in-browser testing tool - for this API here.

The Cognitive Services Computer Vision API provides a simple way to generate thumbnail images from pictures.

Thursday, December 28, 2017 10:31:00 AM (GMT Standard Time, UTC+00:00)
# Wednesday, December 27, 2017

As I discussed in a previous article, Microsoft Cognitive Services includes a set of APIs that allow your applications to take advantage of Machine Learning in order to analyze images, sound, video, and language.

Your application uses Cognitive Services by calling one or more RESTful web services. These services require you to pass a key in the header of each HTTP call. You can generate this key from the Azure portal.
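As a sketch in C#, attaching the key looks like this. CreateClient is a hypothetical helper and the key value is a placeholder; the header name is the one the Cognitive Services endpoints expect:

```csharp
using System.Net.Http;

public class CognitiveServicesClient
{
    // Create an HttpClient that sends the subscription key with every request.
    public static HttpClient CreateClient(string key)
    {
        var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
        return client;
    }
}
```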

If you don't have an Azure account, you can get a free one at https://azure.microsoft.com/free/.

Once you have an Azure Account, navigate to the Azure Portal.

Figure 1

Here you can create a Cognitive Services API key. Click the [New] button in the top left of the portal (Figure 2).

Figure 2

It’s worth noting that the “New” button caption sometimes changes to “Create a Resource” (Figure 2a)

Figure 2a

From the flyout menu, select AI+Cognitive Services. A list of Cognitive Services displays. Select the service you want to call. For this demo, I will select Computer Vision API, as shown in Figure 3.

Figure 3

The Computer Vision API blade displays as shown in Figure 4.

Figure 4

At the Name textbox, enter a name for this service account.

At the Subscription dropdown, select the Azure subscription to associate with this service.

At the Location dropdown, select the region in which you want to host this service. You should select a region close to those who will be consuming the service. Make note of the region you selected.

At the Pricing Tier dropdown, select the pricing tier you want to use. Currently, the choices are F0 (which is free, but limited to 20 calls per minute) and S1 (which is not free, but allows more calls). Click the View full pricing details link to see how much S1 will cost.

At the Resource Group field, select or create an Azure Resource Group. Resource Groups allow you to logically group different Azure resources, so you can manage them together.

Click the [Create] button to create the account. The creation typically takes less than a minute and a message displays when the service is created, as shown in Figure 5.

Figure 5

Click the [Go to resource] button to open a blade to configure the newly-created service. Alternatively, you can select "All Resources" on the left menu and search for your service by name. Either way, the service blade displays, as shown in Figure 6.

Figure 6

The important pieces of information in this blade are the Endpoint (on the Overview tab, Figure 7) and the Access Keys (on the Keys tab, as shown in Figure 8). Within this blade, you also have the opportunity to view log files and other tools to help troubleshoot your service. And you can set authorization and other restrictions on your service.

Figure 7

Figure 8

The process is almost identical when you create a key for any other Cognitive Service. The only difference is that you will select a different service set in the AI+Cognitive Services flyout.

Wednesday, December 27, 2017 10:35:00 AM (GMT Standard Time, UTC+00:00)
# Tuesday, December 26, 2017

Microsoft Cognitive Services is a set of APIs that take advantage of Machine Learning to provide developers with an easy way to analyze images, speech, language, and more.

If you have worked with or studied Machine Learning, you know that you can accomplish a lot, but that it requires a lot of computing power, a lot of time, and a lot of data. Since most of us have a limited amount of each of these, we can take advantage of the fact that Microsoft has data, time, and the computing power of Azure. They have used this power to analyze large data sets and expose the results via a set of web services, collectively known as Cognitive Services.

The APIs of Cognitive Services are divided into 5 broad categories: Vision, Speech, Language, Knowledge, and Search.

Vision APIs

The Vision APIs provide information about a given photograph or video. For example, several Vision APIs are capable of recognizing faces in an image. One analyzes each face and deduces that person's emotion; another can compare 2 pictures and decide whether or not 2 photographs show the same person; a third guesses the age of each person in a photo.

Speech APIs

The Speech APIs can convert speech to text or text to speech. They can also recognize the voice of a given speaker (you might use this to authenticate users, for example) and infer the intent of the speaker from their words and tone. The Translator Speech API supports translations between 10 different spoken languages.


Language APIs

The Language APIs include a variety of services. A spell checker is smart enough to recognize common proper names and homonyms. And the Translator Text API can detect the language in which a text is written and translate that text into another language. The Text Analytics API analyzes a document for the sentiment expressed, returning a score based on how positive or negative is the wording and tone of the document. The most interesting API in this group is the Language Understanding Intelligence Service (LUIS) that allows you to build custom language models so that your application can understand questions and statements from your users in a variety of formats.


Knowledge APIs

The Knowledge APIs include a variety of services - from customer recommendations to smart querying and information about the context of text. Many of these services take advantage of natural language processing. As of this writing, all of these services are in preview.


Search APIs

The Search APIs allow you to retrieve Bing search results with a single web service call.

You can use these APIs in your own applications. To get started, you need an Azure account. You can get a free Azure trial at https://azure.microsoft.com/.

Each API offers a free option that restricts the number and/or frequency of calls, but you can break through that boundary for a charge.  Because they are hosted in Azure, the paid services can scale out to meet increased demand.

You call most of these APIs by passing and receiving JSON to a RESTful web service. Some of the more complex services offer configuration and setup beforehand.
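As an illustrative C# sketch of that pattern (BuildImageJson and CallServiceAsync are hypothetical helpers, and the endpoint URI and key are placeholders you would supply for a specific service):

```csharp
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public class JsonCallExample
{
    // The Vision services accept an image reference as JSON: { "url": "..." }
    public static string BuildImageJson(string imageUrl)
    {
        return "{\"url\": \"" + imageUrl + "\"}";
    }

    // POST the JSON body and return the service's JSON response as a string.
    public static async Task<string> CallServiceAsync(string key, string uri, string imageUrl)
    {
        var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
        var content = new StringContent(BuildImageJson(imageUrl), Encoding.UTF8, "application/json");
        HttpResponseMessage response = await client.PostAsync(uri, content);
        return await response.Content.ReadAsStringAsync();
    }
}
```

The same key-in-header, JSON-in-body shape applies across most of the services; only the endpoint URI and the request payload change.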

These APIs are capable of analyzing pictures, text, and speech because each service draws on the knowledge learned from parsing countless photos, documents, etc. beforehand.

You can find documentation, sample code, and even a place to try out each API live in your browser at https://azure.microsoft.com/en-us/services/cognitive-services/

A couple of fun applications of Cognitive Services are how-old.net (which guesses the ages of people in photographs) and what-dog.net (which identifies the breed of dog in a photo).

Below is a screenshot from the Azure documentation page, listing the sets of services. But keep checking back, because this list grows and each set contains one or more services.

List of Cognitive Services
Sign up today and start building apps. It’s fun, it's useful, and it’s free!

Tuesday, December 26, 2017 10:25:00 AM (GMT Standard Time, UTC+00:00)
# Monday, December 25, 2017
Monday, December 25, 2017 9:48:00 AM (GMT Standard Time, UTC+00:00)
# Sunday, December 24, 2017

I have been recording my online TV show - Technology and Friends - for 9 years. I recently passed episode #500.

The show has evolved over the years and so has the recording equipment I use.

Below is a description of the hardware I use to record Technology and Friends.

Camera: Canon EOS6D

This is the second Canon SLR I’ve purchased. My EOS 30D lasted over 10 years, so I returned to a similar, but updated model when it finally began to fail. The EOS 6D is primarily a still camera, but it can record up to 30 minutes of high-resolution video. The image quality is outstanding, particularly with the 24-105mm Canon lens I bought with it. This setup is overkill (read: "expensive") for a show that most people view in a browser, but I also use this camera for still photography and I have been happy with the results. The main downside for video is the 30-minute limit. After this time, someone needs to re-start the recording.

Audio Recorder: Zoom H6 Handy Recorder

I bought a Zoom recorder a few years ago on the recommendation of Carl Franklin, who is the co-host and the audio expert of the excellent .NET Rocks podcast. It served me well for years, so I bought the H6 when it was time to replace it. This device contains 2 built-in microphones, but I almost always plug in 2 external microphones, so I can get closer to a speaker's mouth. I can plug in up to 4 external microphones. Using these microphones eliminates most of the background noise, allowing me to record in crowded areas. Each microphone can record to a separate audio file, which is convenient if one speaker is much louder than another.

Microphones: Shure SM58

I went with Shure based on popularity and Amazon reviews. I bought these mid-level models and I have been happy with the results. I strongly recommend external microphones (either lapel or handheld) when recording audio. My show is much better since I began using them. Switching to separate microphones is probably the single technical change that produced the biggest jump in quality for my show.

Tripod: Vanguard Lite1

This is a cheap tripod, but it has lasted me for years. I have a larger tripod, but seldom use it because the Vanguard is small enough to throw in a backpack, carry on a plane, and carry around a conference. I also like the fact that I can set it on a tabletop, which is what I usually do. It is not quite tall enough to stand on the ground and hold the camera as high as the face of a standing adult.

Sunday, December 24, 2017 5:56:17 PM (GMT Standard Time, UTC+00:00)
# Friday, December 22, 2017

Roy Ayers is 77 years old and stutters when he talks. But not when he sings. And definitely not when he plays the vibraphone. And play he did last night in front of a packed house at The Promontory in Hyde Park.

Ayers mixed a few ballads with the jazz-funk that he helped define. Backed by a band consisting of bass, drums, keyboard, and another vocalist, Ayers played for about 90 minutes, drawing on his 99 albums with such songs as "Red, Black & Green", "Don't Stop the Feeling", and his interpretation of Sam Cooke's "You Send Me".

The keyboardist was the best of the bunch, coaxing a variety of sounds from his instrument during his many solos. I wondered why the stage setup hid so much of him from the audience's view.

And then there was Roy and his vibraphone. Ayers still sounds great when he does his thing with his vibes.

I bought a ticket at the door and had to stand in the back with some folks who decided it was ok to engage in loud conversation at the concert. But I had a chance to shake the hand of Mr. Ayers after the show and tell him how much I enjoyed his music.

And to wish him luck on his next 99 albums.

Friday, December 22, 2017 10:33:00 AM (GMT Standard Time, UTC+00:00)
# Thursday, December 21, 2017

"Mirror Dance" by Lois McMaster Bujold is the sequel to "Brothers In Arms", the novel that introduced Miles Vorkosigan's clone / brother Mark.

Following Miles's rescue of Mark in the previous novel, the brothers return to Miles's home planet of Barrayar, where Mark decides to launch a rescue mission to liberate clones who are intended to be used as replacement parts for their genetic donors. Miles follows and is gravely wounded in the ensuing battle. His body is cryogenically frozen and then disappears. Mark returns to Barrayar to deliver the news to their parents - Lord Aral and Lady Cordelia. Aral and Cordelia accept Mark as their son and a potential heir to the Vorkosigan line. Ultimately, Mark launches another rescue mission, this one to find and save Miles, who has been revived by scientists on an enemy planet.

This is one of Bujold's strongest novels. She not only tells a complex story, but she dives further into the emotions of her characters - particularly the clone Mark.

The definition of humanity and the rights that go with it are common themes of Bujold's books, and this one delves into them very well, if a little heavy-handedly. Interwoven with this general question is Mark’s personal struggle to define his own identity. He desperately wants to define himself as more than just the clone of a heroic Lord, but his struggle to do so often leads to failure. Others help him with the struggle. He was raised to assassinate Miles's father, but ends up being accepted by his potential victim and his new family.

Thursday, December 21, 2017 11:14:00 AM (GMT Standard Time, UTC+00:00)
# Monday, December 18, 2017
Monday, December 18, 2017 5:36:00 PM (GMT Standard Time, UTC+00:00)
# Monday, December 11, 2017
Monday, December 11, 2017 11:48:00 AM (GMT Standard Time, UTC+00:00)
# Friday, December 8, 2017

For years, Miles has been leading a double life - he was born Lord Miles Vorkosigan, who became a lieutenant in the army of the Barrayaran empire; but he sometimes assumes the role of Admiral Naismith, leader of the Dendarii Free Mercenary Fleet.

One day, Miles is forced to appear on the same planet as both of his personas on the same day. Fearing his cover will be blown, he invents a story that Naismith is actually Miles's clone.

Shortly afterward, Miles discovers that he actually does have a clone and that this clone is being used by his enemies in a plot to assassinate Miles.

The book is a good adventure story. It advances the relationship between Miles and Elli (his bodyguard / lover); and it addresses a glaring plot problem - Miles disguises himself as a mercenary Admiral despite his unique physique. It also takes place on future Earth, which is a bonus for those of us who call Earth home today.

Friday, December 8, 2017 7:18:10 AM (GMT Standard Time, UTC+00:00)