# Tuesday, March 15, 2016

It's difficult enough for humans to recognize emotions in the faces of other humans. Can a computer accomplish this task? It can if we train it to and if we give it enough examples of different faces with different emotions.

When we supply data to a computer with the objective of training that computer to recognize patterns and predict new data, we call that Machine Learning. And Microsoft has done a lot of Machine Learning with a lot of faces and a lot of data and they are exposing the results for you to use.

The Emotions API in Project Oxford looks at pictures of people and determines their emotions. Possible emotions returned are anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise. Each emotion is assigned a confidence level between 0 and 1 - higher numbers indicate a higher confidence that this is the emotion expressed in the face. If a picture contains multiple faces, the emotion of each face is returned.

The API is a simple REST web service located at https://api.projectoxford.ai/emotion/v1.0/recognize. POST to this service with a header that includes:

Ocp-Apim-Subscription-Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

where xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx is your key. You can find your key at https://www.projectoxford.ai/Subscription?popup=True

and a body that includes the following data:

{ "url": "http://xxxx.com/xxxx.jpg" }

where http://xxxx.com/xxxx.jpg is the URL of an image.
The full request looks something like:
POST https://api.projectoxford.ai/emotion/v1.0/recognize HTTP/1.1
Content-Type: application/json
Host: api.projectoxford.ai
Content-Length: 62
Ocp-Apim-Subscription-Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

{ "url": "http://xxxx.com/xxxx.jpg" }

This will return JSON data identifying each face in the image, along with scores indicating how confident the API is that the face is expressing each of 8 possible emotions. For example, passing the URL of the picture below of 3 attractive, smiling people

(found online at https://giard.smugmug.com/Tech-Community/SpartaHack-2016/i-4FPV9bf/0/X2/SpartaHack-068-X2.jpg)

returned the following data:

    [
      {
        "faceRectangle": {
          "height": 113,
          "left": 285,
          "top": 156,
          "width": 113
        },
        "scores": {
          "anger": 1.97831262E-09,
          "contempt": 9.096525E-05,
          "disgust": 3.86221245E-07,
          "fear": 4.26409547E-10,
          "happiness": 0.998336,
          "neutral": 0.00156954059,
          "sadness": 8.370223E-09,
          "surprise": 3.06117772E-06
        }
      },
      {
        "faceRectangle": {
          "height": 108,
          "left": 831,
          "top": 169,
          "width": 108
        },
        "scores": {
          "anger": 2.63808062E-07,
          "contempt": 5.387114E-08,
          "disgust": 1.3360991E-06,
          "fear": 1.407629E-10,
          "happiness": 0.9999967,
          "neutral": 1.63170478E-06,
          "sadness": 2.52861843E-09,
          "surprise": 1.91028926E-09
        }
      },
      {
        "faceRectangle": {
          "height": 100,
          "left": 591,
          "top": 168,
          "width": 100
        },
        "scores": {
          "anger": 3.24157673E-10,
          "contempt": 4.90155344E-06,
          "disgust": 6.54665473E-06,
          "fear": 1.73284559E-06,
          "happiness": 0.9999156,
          "neutral": 6.42121E-05,
          "sadness": 7.02297257E-06,
          "surprise": 5.53670576E-09
        }
      }
    ]

The high values for the 3 happiness scores and the very low values for all the other scores suggest a very high degree of confidence that the people in this photo are happy.
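Consuming this response in code is simple: for each face, take the highest-scoring emotion. Here is an illustrative Python sketch (the helper `top_emotion` is my own name, not part of the API; the sample data is the first face from the response above):

```python
def top_emotion(face):
    """Return (emotion, score) for the highest-confidence emotion of one face."""
    scores = face["scores"]
    emotion = max(scores, key=scores.get)
    return emotion, scores[emotion]

# First face from the response above
face = {
    "faceRectangle": {"height": 113, "left": 285, "top": 156, "width": 113},
    "scores": {
        "anger": 1.97831262e-09,
        "contempt": 9.096525e-05,
        "disgust": 3.86221245e-07,
        "fear": 4.26409547e-10,
        "happiness": 0.998336,
        "neutral": 0.00156954059,
        "sadness": 8.370223e-09,
        "surprise": 3.06117772e-06,
    },
}

emotion, score = top_emotion(face)  # → ("happiness", 0.998336)
```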

Here is the request in the popular HTTP analysis tool Fiddler (http://www.telerik.com/fiddler):





Sending requests to the Project Oxford REST API makes it simple to analyze the emotions of people in a photograph.

Tuesday, March 15, 2016 9:57:07 AM (GMT Standard Time, UTC+00:00)
# Monday, March 14, 2016
Monday, March 14, 2016 4:05:00 PM (GMT Standard Time, UTC+00:00)

Generating a thumbnail image from a larger image sounds easy – just shrink the dimensions of the original, right? But it becomes more complicated if the thumbnail is a different shape than the original. In that case, we need to either crop or distort the original image. Distorting the image tends to look very bad; and when we crop an image, we need to ensure that the primary subject remains in the generated thumbnail. To do this, we must identify the primary subject of the image – easy for a human observer, but difficult for a computer, and a computer is necessary if we want to automate this process.

This is where machine learning can help. By analyzing many images, Machine Learning can figure out what parts of a picture are likely to be the main subject. Once this is known, it becomes a simpler matter to crop the picture in such a way that the main subject remains.
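Once the main subject is identified, the crop itself is straightforward geometry. This illustrative Python sketch (my own simplification, not Project Oxford's algorithm) picks the largest crop with the target aspect ratio that keeps the subject's center in view:

```python
def crop_box(orig_w, orig_h, target_w, target_h, subject_x, subject_y):
    """Choose the largest crop with the target aspect ratio, centered on the
    subject as much as possible while staying inside the original image.
    Returns (left, top, crop_width, crop_height)."""
    target_ratio = target_w / target_h
    if orig_w / orig_h > target_ratio:
        # Original is wider than the target shape: keep full height, trim width
        crop_h = orig_h
        crop_w = int(orig_h * target_ratio)
    else:
        # Original is taller than the target shape: keep full width, trim height
        crop_w = orig_w
        crop_h = int(orig_w / target_ratio)
    # Center the crop on the subject, clamped to the image bounds
    left = min(max(subject_x - crop_w // 2, 0), orig_w - crop_w)
    top = min(max(subject_y - crop_h // 2, 0), orig_h - crop_h)
    return left, top, crop_w, crop_h
```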

Project Oxford uses Machine Learning so that you don't have to. It exposes an API to create an intelligent thumbnail image from any picture.

You can see this in action at www.projectoxford.ai/demo/vision#Thumbnail.

Figure 1

With this live, in-browser demo, you can either select an image from the gallery and view the generated thumbnails; or provide your own image - either from your local computer or from a public URL. The page uses the Thumbnail API to create thumbnails of 6 different dimensions.

Figure 2

For your own application, you can either call the REST Web Service directly or (for a .NET application) use a custom library. The library simplifies development by abstracting away HTTP calls via strongly-typed objects.

To get started, you will need a free Project Oxford account, which requires signing into projectoxford.ai with a Microsoft account.

For this API, you need a key. From the Computer Vision API page (Figure 3), click the [Try for free >] button; then, click the "Show" link under the Primary key of the "Computer Vision" section (Figure 4).

Figure 3 

Figure 4 

To use the SDK, add the Microsoft.ProjectOxford.Vision NuGet package to your project: Right-click on your project, select Manage NuGet Packages, search for "ProjectOxford.Vision", select the package from the list, and click the [Install] button, as shown in Figure 5.

Figure 5

This adds a reference to Microsoft.ProjectOxford.Vision.dll, which contains classes that make it easier to call this API.

Add the following statement to the top of a class file to use this library.

using Microsoft.ProjectOxford.Vision;

Now, you can use the methods in the VisionServiceClient class to interact with the API.

Create a VisionServiceClient with the following code:

string subscriptionKey = "xxxxxxxxxxxxxxxxxxxxxxxxxxx";
IVisionServiceClient visionClient = new VisionServiceClient(subscriptionKey);

where “xxxxxxxxxxxxxxxxxxxxxxxxxxx” is your subscription key.

Next, use the GetThumbnailAsync method to generate a thumbnail image. The following code creates a 200x100 thumbnail of a photo of a buoy in Stockholm, Sweden.

string originalPicture = @"https://giard.smugmug.com/Travel/Sweden-2015/i-ncF6hXw/0/L/IMG_1560-L.jpg";
int width = 200;
int height = 100;
bool smartCropping = true;
byte[] thumbnailResult = null;
thumbnailResult = visionClient.GetThumbnailAsync(originalPicture, width, height, smartCropping).Result;

The result is an array of bytes; you can save the corresponding image to a file with the following code:

string folder = @"c:\test";
string thumbnailFullPath = string.Format("{0}\\thumbnailResult_{1:yyyyMMddHHmmss}.jpg", folder, DateTime.Now);
using (BinaryWriter binaryWrite = new BinaryWriter(new FileStream(thumbnailFullPath, FileMode.Create, FileAccess.Write)))
{
    binaryWrite.Write(thumbnailResult);
}

Below is the full listing of a Console App that generates a thumbnail, then opens both the original image and the saved thumbnail image for comparison.

using System;
using System.Diagnostics;
using System.IO;
using Microsoft.ProjectOxford.Vision;

namespace ThumbNailConsole
{
    class Program
    {
        static void Main(string[] args)
        {
            string subscriptionKey = "xxxxxxxxxxxxxxxxxxxxxxxxxxx";
            IVisionServiceClient visionClient = new VisionServiceClient(subscriptionKey);

            string originalPicture = @"https://giard.smugmug.com/Travel/Sweden-2015/i-ncF6hXw/0/L/IMG_1560-L.jpg";
            int width = 200;
            int height = 100;
            bool smartCropping = true;
            byte[] thumbnailResult = visionClient.GetThumbnailAsync(originalPicture, width, height, smartCropping).Result;

            string folder = @"c:\test";
            string thumbnailFullPath = string.Format("{0}\\thumbnailResult_{1:yyyyMMddHHmmss}.jpg", folder, DateTime.Now);
            using (BinaryWriter binaryWrite = new BinaryWriter(new FileStream(thumbnailFullPath, FileMode.Create, FileAccess.Write)))
            {
                binaryWrite.Write(thumbnailResult);
            }

            // Open the original image and the generated thumbnail for comparison
            Process.Start(originalPicture);
            Process.Start(thumbnailFullPath);

            Console.WriteLine("Done! Thumbnail is at {0}!", thumbnailFullPath);
        }
    }
}

The result is shown in Figure 6 below.


One thing to note: the Thumbnail API is part of the Computer Vision API. As of this writing, the free version of the Computer Vision API is limited to 5,000 transactions per month. If you want more than that, you will need to upgrade to the Standard version, which charges $1.50 per 1,000 transactions.

But this should be plenty for you to learn this API for free and build and test your applications until you need to put them into production.

The code above can be found on GitHub.

Monday, March 14, 2016 4:01:00 AM (GMT Standard Time, UTC+00:00)
# Sunday, March 13, 2016

Project Oxford is a set of APIs that take advantage of Machine Learning to provide developers with intelligent vision, speech, and language services.

These technologies require Machine Learning, which requires a lot of computing power and a lot of data. Most of us have neither, but Microsoft does and has used it to create the APIs in Project Oxford.

Project Oxford provides APIs to analyze pictures and voice and provide intelligent information about them.

There are three broad categories of services: Vision, Speech, and Language.

The Vision APIs analyze pictures and recognize objects in those pictures. For example, several Vision APIs are capable of recognizing faces in an image. One analyzes each face and deduces that person's emotion; another can compare 2 pictures and decide whether or not 2 photographs are of the same person; a third guesses the age of each person in a photo.

The Speech APIs can convert speech to text or text to speech. They can also recognize the voice of a given speaker (if you want to use that for authentication in your app, for example) and infer the intent of the speaker from his words and tone.

The Language APIs seem more of a grab bag to me. For example, the spell checker is smart enough to recognize common proper names and homonyms.

All these APIs are currently in Preview, but I've played with them and they appear very solid. Many of them even provide a confidence factor to let you know how much confidence you should place in the value returned. For example, 2 faces may represent the same person, but it helps to know how closely they match.

You can use these APIs yourself. To get started, you need a Project Oxford account, but you can get one for free at projectoxford.ai.

Each API offers a free option that restricts the number and/or frequency of calls; you can exceed those limits for a charge.

You can also find documentation, sample code, and even a place to try out each API live in your browser at projectoxford.ai.

You call each API by passing JSON to and receiving JSON from a RESTful web service, but some of the APIs also offer an SDK to make it easier to make those calls from a .NET application.

You can see a couple of fun applications of Project Oxford at how-old.net (which guesses the ages of people in photographs) and what-dog.net (which identifies the breed of dog in a photo).

Sign up today and start building apps. It’s fun and it’s free!

Sunday, March 13, 2016 3:14:12 AM (GMT Standard Time, UTC+00:00)
# Friday, March 11, 2016

The auditorium darkened. The music began and a small light appeared at the front of the room; then more. Students on stage danced and waved lanterns on ropes for an impressive musical light show to kick off the 2016 SpartaHack hackathon.


Students came from all over the world to attend this hackathon on the East Lansing campus. Over 200 universities were represented among the applicants. In addition to a number of international students studying on American campuses, I met students who traveled to the hackathon from India, Russia, Germany, and the Philippines.

My colleague Brian Sherwin arrived in East Lansing the day before the hackathon to host an Azure workshop for 30 students - showing them how to use the cloud platform to enhance their applications. Ann Lergaard joined us a day later and we did our best to answer student questions and help them build better projects. Late Friday night, I delivered a tech talk showing off some of the services available in Azure.

Microsoft offered a prize for the best hack using our technology. It was won by 2 students who built an application that allowed users to take a photo of text with their iPhone and, in response to voice commands, hear any part of that text read back. The project combined Microsoft's Project Oxford OCR API with an Amazon Echo and its Alexa platform, an iPhone app, and a Firebase database.

A couple other cool hacks were:

  • ValU, an app that analyzed historical stock price data using Microsoft Excel and VBA scripts.
  • Spartifai, which modified a driver, allowing a Kinect device to be used with a MacBook.

A hackathon is an event at which students and others come together and build software and/or hardware projects in small teams over the course of a couple days. I attend a lot of hackathons and SpartaHack was one of the better organized that I've seen. Over 500 students spent the weekend building a wide variety of impressive projects - often with technology they had not touched prior to that weekend. The organizers also did a great job of providing fun activities beyond just hacking. A jazz band and a rock band each performed a set for students to enjoy during a break; a Super Smash Brothers tournament was scheduled; and a Blind Coding Contest challenged students to write code without compiling or testing to see if it would run correctly the first time in front of an audience.

As sponsors of the event, we tried to provide some fun as well. We gave away prizes for building a snowman and for tweeting about open source technology. We also provided some loaner hardware for students; and we spent a lot of time mentoring students, which resulted in a lack of sleep this weekend.

The MSU campus has changed a great deal since I earned my undergraduate degree there decades ago. It has even changed since my son graduated from there 4 years ago. But it still felt like a homecoming for me.



Friday, March 11, 2016 4:42:00 PM (GMT Standard Time, UTC+00:00)
# Monday, March 7, 2016
Monday, March 7, 2016 12:52:00 PM (GMT Standard Time, UTC+00:00)
# Sunday, March 6, 2016

Today I am grateful to unexpectedly run into Jody​ and David in Chicago yesterday.

Today I am grateful for new shelves and less clutter in my apartment.

Today I am grateful for a home-cooked dinner last night.

Today I am grateful for dinner last night with Kevin.

Today I am grateful for:
-an overwhelming number of kind messages on Facebook yesterday.
-a birthday lunch with Chris in Grand Rapids yesterday.

Today I am grateful for my first visit to Ann Arbor, MI since I sold my house, including:
-A personal tour of The Forge by Jeeva
-Coffee with Velichka
-A great crowd to attend my Azure Mobile Apps talk at Mobile Monday.

Today I am grateful to the SpartaHack organizers and hackers who contributed to a successful hackathon this past weekend.

Today I am grateful that this parking ticket was thrown out by the local police.

Today I am grateful for a day in East Lansing, MI and at Michigan State University.

Today I am grateful for a good crowd at our Azure Workshop last night at MSU.

Today I am grateful for the technology that allows me to watch TV and movies when and where I want.

Today I am grateful to Michael, Chris, Chris, and Murali for helping make yesterday's Cloud Camp a success by bringing real world experience to the presentations.

Today I am grateful to be able to ride my bike in February in Chicago.

Today I am grateful to the organizers and participants at HackIllinois who made this weekend's hackathon so successful.

Today I am grateful to Vanessa, who brought me espresso yesterday each time my energy ran low.

Today I am grateful for the hundreds of students who came to my Azure talk last night that I re-wrote yesterday afternoon.

Today I am grateful for this excellent and unexpected moving gift from Betsy.

Today I am grateful for unsolicited praise from a co-worker yesterday.

Today I am grateful for an unexpected call from my cousin Kevin yesterday.

Today I am grateful for dinner with Christina last night.

Today I am grateful for a pristine blanket of snow covering downtown Chicago this morning.

Today I am grateful for Nyquil, Dayquil, and a Neti Pot.

Today I am grateful for finally paying off some sleep debt.

Today I am grateful for:
-Lunch yesterday with Matt
-A chance to present to a group of University of Chicago students

Today I am grateful for:
-My first day with a personal trainer at the new gym
-Attending Founder Institute graduation

Today I am grateful for my first game at Purdue's Mackey Arena.

Today I am grateful for the help I received yesterday unpacking dozens of boxes from my move.

Today I am grateful for an excellent week in Seattle.

Sunday, March 6, 2016 12:51:12 PM (GMT Standard Time, UTC+00:00)
# Monday, February 29, 2016
Monday, February 29, 2016 5:44:00 PM (GMT Standard Time, UTC+00:00)
# Wednesday, February 24, 2016

When I lived in Michigan, I was a regular attendee of the Southeast Michigan JavaScript meetup – a local user group that attracted close to a hundred attendees each month and excellent speakers from all over the country.

One thing I admired about this meetup is their habit of recording meeting presentations. 

Those recordings are now available on Microsoft’s Channel 9 site. You can view dozens of these presentations at https://channel9.msdn.com/Blogs/semjs.

In the past 2 weeks, over 50,000 people have watched these videos on Channel 9.

Below are some of the more popular presentations:

Wednesday, February 24, 2016 9:11:23 PM (GMT Standard Time, UTC+00:00)
# Tuesday, February 23, 2016