# Thursday, March 4, 2021

GCast 106:

Audio Transcription and Captioning with Azure Media Services

Azure Media Services can analyze an audio or video file and transcribe speech into text. You can then take the generated files and provide synchronized captioning for your video.

Thursday, March 4, 2021 8:52:00 AM (GMT Standard Time, UTC+00:00)
# Monday, March 1, 2021

Episode 650

Christos Matskas on Microsoft Identity Platform

Microsoft Identity Platform is a set of authentication services, open-source libraries, and application management tools. Christos Matskas describes these tools and how to use them to make your application more secure.

Links:
https://docs.microsoft.com/en-us/azure/active-directory/develop/
https://docs.microsoft.com/en-us/azure/active-directory/develop/sample-v2-code
https://www.twitch.tv/425show

Monday, March 1, 2021 8:48:00 AM (GMT Standard Time, UTC+00:00)
# Thursday, February 25, 2021

GCast 105:

Analyzing a Video with Azure Media Services

Learn how to use Azure Media Services to apply Artificial Intelligence and Machine Learning to a video, providing capabilities such as face detection, speech-to-text, object detection, and optical character recognition.

Thursday, February 25, 2021 8:52:00 AM (GMT Standard Time, UTC+00:00)
# Thursday, February 18, 2021

GCast 104:

Sharing a Video Online with Azure Media Services

Learn how to use Azure Media Services to share a video on the web for streaming and/or for downloading.

Thursday, February 18, 2021 8:51:00 AM (GMT Standard Time, UTC+00:00)
# Thursday, February 11, 2021

GCast 103:

Encode a Video with Azure Media Services

Learn how to use Azure Media Services to encode a video into multiple formats, including support for adaptive streaming.

Thursday, February 11, 2021 9:50:00 AM (GMT Standard Time, UTC+00:00)
# Tuesday, February 9, 2021

In previous articles, I showed how to use Azure Media Services (AMS) to work with video that you upload. In this article, I will show how to broadcast a live event using AMS.

Before you get started, you will need some streaming software. For the demo in this article, I used Wirecast from Telestream. Telestream offers a free version, which is good for learning and demos but not for production, as it places a watermark on all streaming videos.

You will need to create an Azure Media Services account, as described in this article.

After the Media Service is created, navigate to the Azure Portal and to your Azure Media Services account, as shown in Fig. 1.

ams01-OverviewBlade
Fig. 1

Then, select "Live streaming" from the left menu to open the "Live streaming" blade, as shown in Fig. 2.

ams02-LiveStreamingBlade
Fig. 2

Click the [Add live event] button (Fig. 3) to open the "Create live event dialog", as shown in Fig. 4.

ams03-AddLiveEventButton
Fig. 3

ams04-CreateLiveEvent
Fig. 4

At the "Live event name" field, enter a name for your event.

You may optionally enter a description and change the Encoding, Input protocol, Input ID, or Static hostname prefix.

Check the "I have all the rights..." checkbox to indicate you are not streaming content owned by anyone other than yourself.

Click the [Review + create] button to display the summary page, as shown in Fig. 5.

ams05-CreateLiveEventConfirmation
Fig. 5

If any validation errors display, return to the "Basics" page, and correct them.

Click the [Create] button to create the live event.
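If you prefer scripting to the portal, a live event can also be created and started with the Azure CLI. Treat the following as a rough sketch: the resource group, account, and event names are placeholders, and the --streaming-protocol and --ips values are assumptions you should verify with az ams live-event create --help.

# Create a live event that accepts an RTMP ingest (names are placeholders)
az ams live-event create --resource-group MyResourceGroup --account-name mymediaservices --name MyLiveEvent --streaming-protocol RTMP --ips AllowAll

# Start the event; this corresponds to the [Start] button described below, and billing applies while it runs
az ams live-event start --resource-group MyResourceGroup --account-name mymediaservices --name MyLiveEvent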

When the event is created, you will return to the "Live streaming" blade with your event listed, as shown in Fig. 6.

ams06-LiveStreamingBlade
Fig. 6

Click the event name link to display the event details, as shown in Fig. 7.

ams07-LiveEvent
Fig. 7

Click the [Start] button (Fig. 8) and click [Start] on the confirmation popup (Fig. 9) to start the event.

ams08-StartButton
Fig. 8

ams09-ConfirmStart
Fig. 9

When the event is started, the event details page will show information about the input, as shown in Fig. 10.

ams10-LiveEvent
Fig. 10

The "Input URL" textbox (Fig 11) displays a URL that you will need in your streaming software. Copy this URL and save it somewhere. You will need it in your streaming software.

ams11-InputUrl
Fig. 11

For the next part, you will need your streaming software. The user interface of Wirecast's free demo version is shown in Fig. 12.

ams12-Wirecast
Fig. 12

The following steps are specific to Wirecast, but other streaming software will have similar steps.

Click the [+] button on the first layer (Fig. 13) to open the "Add Shot" dialog, as shown in Fig. 14.

ams13-Layer
Fig. 13

ams14-AddLayer
Fig. 14

I chose to share the image captured by my webcam, but you can share screen captures or videos, if you like. The image you are capturing will be set as a "preview". Make this same layer broadcast live by clicking the "Live" button (Fig. 15).

ams15-GoButton
Fig. 15

Now, configure your streaming software to send its live video to your AMS Live Streaming event. Select Output | Output Settings... from the menu to open the Output dialog, as shown in Fig. 16.

ams16-OutputSettings
Fig. 16

Select "RTMP Server" from the "Destination" dropdown and click the [OK] button to open the "Output settings" dialog, as shown in Fig. 17.

ams17-OutputSettings
Fig. 17

In the "Address" text box, paste the Input URL that you copied from the AMS Live Stream event. Click the [OK] button to close the dialog.

To begin streaming, select Output | Start / Stop Broadcasting | Start All from the menu, as shown in Fig. 18.

ams18-StartOutput
Fig. 18

Your UI should look similar to Fig. 19.

ams19-Wirecast
Fig. 19

Return to the Azure Media Services live event. You should see a preview of what you are broadcasting from your streaming software, as shown in Fig. 20. Refresh the page if you do not see it. There may be a few seconds delay between what is captured and what is displayed.

ams20-LiveEvent
Fig. 20

Click the [+ Create an output] button (Fig. 21) to open the "Create an output" dialog with the "Create output" tab selected, as shown in Fig. 22.

ams21-CreateAnOutput
Fig. 21

ams22-CreateOutputDialog
Fig. 22

Verify the information on this tab; then, click the [Next: Add streaming locator] button to advance to the "Add streaming locator" tab, as shown in Fig. 23.

ams23-CreateOutput
Fig. 23

Verify the information on this tab; then, click the [Create] button to create a streaming locator and endpoint. You will return to the live event blade, as shown in Fig. 24.

ams24-StreamingEndpoint
Fig. 24

Click the [Start streaming endpoint] button, then click the confirmation [Start] button, as shown in Fig. 25.

ams25-StartStreamingEndpoint
Fig. 25

After the streaming endpoint is started, copy the "Streaming URL" textbox contents (Fig. 26). You will need this to create an output page for viewers to watch your live event.

ams26-StreamingUrl
Fig. 26

Create and launch a web page with the HTML in Listing 1.

Listing 1:

<!DOCTYPE html>
<html lang="en">
<head>
    <title>Azure Media Services Demo</title>
    <link href="https://amp.azure.net/libs/amp/2.3.6/skins/amp-default/azuremediaplayer.min.css" rel="stylesheet">
    <script src="https://amp.azure.net/libs/amp/2.3.6/azuremediaplayer.min.js"></script>
</head>
<body>
    <h1>Video</h1>
    <video id="vid1" class="azuremediaplayer amp-default-skin" autoplay controls width="640" height="400" data-setup='{"nativeControlsForTouch": false}'>
        <source src="STREAMING_URL"
                type="application/vnd.ms-sstr+xml" />
    </video>
</body>
</html>
  

where STREAMING_URL is the Streaming URL you copied from the live event textbox above.

Listing 2 shows an example with the URL filled in.

Listing 2:

<!DOCTYPE html>
<html lang="en">
<head>
    <title>Azure Media Services Demo</title>
    <link href="https://amp.azure.net/libs/amp/2.3.6/skins/amp-default/azuremediaplayer.min.css" rel="stylesheet">
    <script src="https://amp.azure.net/libs/amp/2.3.6/azuremediaplayer.min.js"></script>
</head>
<body>
    <h1>Video</h1>
    <video id="vid1" class="azuremediaplayer amp-default-skin" autoplay controls width="640" height="400" data-setup='{"nativeControlsForTouch": false}'>
        <source src="https://dgtestams-usea.streaming.media.azure.net/45fb391c-8e10-4d41-a0ab-a03e50d57afd/cb4a49d9-93ad-4bb1-8894-c3f0a9fb7d43.ism/manifest"
                type="application/vnd.ms-sstr+xml" />
    </video>
</body>
</html>
  

With the live event running, your web page should display something similar to Fig. 27.

ams27-WebPage
Fig. 27

If this is published on the web, viewers will be able to watch your live stream from just about anywhere.

Be sure to stop your live event when you finish broadcasting in order to avoid unnecessary charges.
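If you script your broadcasts, the same cleanup can be done from the Azure CLI. A minimal sketch, assuming the placeholder names used earlier and the default streaming endpoint:

# Stop the live event when the broadcast ends
az ams live-event stop --resource-group MyResourceGroup --account-name mymediaservices --name MyLiveEvent

# Stop the streaming endpoint as well, if nothing else is using it
az ams streaming-endpoint stop --resource-group MyResourceGroup --account-name mymediaservices --name default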

In this article, I showed you how to create a live streaming event using Azure Media Services.

Tuesday, February 9, 2021 8:03:00 AM (GMT Standard Time, UTC+00:00)
# Thursday, February 4, 2021

GCast 102:

Video Files and Azure Media Services

Learn the capabilities of Azure Media Services, how to create an Azure Media Services account, and how to add audio and video files as Assets in that account.

Thursday, February 4, 2021 9:45:00 AM (GMT Standard Time, UTC+00:00)
# Wednesday, February 3, 2021

In a previous article, I showed how to embed into a web page a video encoded with Azure Media Services (AMS).

In this article, I will show you how to add captions to that video.

In my last article, I showed you how to perform audio transcription with Azure Media Services using an Audio Transcription Job. Among other things, this generates a transcript.vtt file with speech-to-text data, listing anything spoken in the video, along with the time at which the words were spoken.

You can also generate this data by using the "Video and Audio Analyzer" job, as described in this article.

For this to work, the transcript.vtt file must be in the same folder as the video(s) playing on your web page. A simple way to do this is to download the file from its current container and upload it into the Encoded Video container.
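If you are comfortable with the command line, you can copy the file with the Azure CLI instead of the portal steps that follow. This is a sketch only: the storage account name, account key, and the two asset-... container names are placeholders for your own values.

# Download transcript.vtt from the transcription/analyzer output container...
az storage blob download --account-name mystorageaccount --account-key <storage-account-key> --container-name asset-aaaaaaaa --name transcript.vtt --file transcript.vtt

# ...and upload it into the container that holds the encoded video
az storage blob upload --account-name mystorageaccount --account-key <storage-account-key> --container-name asset-bbbbbbbb --name transcript.vtt --file transcript.vtt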

Navigate to the Azure Portal and to your Azure Media Services account, as shown in Fig. 1.

ams01-OverviewBlade
Fig. 1

Then, select "Assets" from the left menu to open the "Assets" blade, as shown in Fig. 2.

ams02-AssetsBlade
Fig. 2

Select the Output Asset containing Audio Transcription or Analyzer data to display the Asset details page, as shown in Fig. 3.

ams03-AssetDetails
Fig. 3

Click the link next to the "Storage container" label (Fig. 4) to open the Storage Blob container associated with this asset, as shown in Fig. 5. This container should open in a new browser tab.

ams04-StorageContainerLink
Fig. 4

ams05-Container
Fig. 5

Click the "transcript.vtt" row to open the blob blade showing details of the transcript.vtt blob, as shown in Fig. 6.

ams06-VttBlobDetails
Fig. 6

Click the download button (Fig. 7) in the top toolbar and save the transcript.vtt file on your local disc. Note where you save this file.

ams07-DownloadButton
Fig. 7

Listing 1 shows a sample VTT file.

Listing 1

WEBVTT

NOTE duration:"00:00:11.0290000"

NOTE language:en-us

NOTE Confidence: 0.90088177

00:00:00.000 --> 00:00:04.956 
 This video is about Azure Media Services

NOTE Confidence: 0.90088177

00:00:04.956 --> 00:00:11.029 
 and Azure Media Services are. Awesome.
  

Navigate again to the "Assets" blade, as shown in Fig. 8.

ams08-AssetsBlade
Fig. 8

In the row of the Analyzer or Audio Transcription asset, click the link in the "Storage link" column to open the container associated with this asset, as shown in Fig. 9.

ams09-Container
Fig. 9

Click the upload button (Fig. 10) to open the "Upload blob" dialog, as shown in Fig. 11.

ams10-UploadButton
Fig. 10

ams11-UploadBlobDialog
Fig. 11

Click the "Select a file" field to open a file navigation dialog. Navigate to the older where you stored transcript.vtt and select this file. Then, click the [Upload]

When the dialog closes, you should return to the Container blade and transcript.vtt should now be listed, as shown in Fig. 12.

ams12-Container
Fig. 12

Click to open the asset containing the video(s) used to generate the VTT file, as shown in Fig. 13.

ams13-AssetDetails
Fig. 13

Start the Streaming Locator, if it is not already started. If you have not yet created a Streaming Locator, this article walks you through it.

Copy the Streaming URL and save it somewhere. It should begin with "https://" and end with "manifest".

As a reminder, Listing 2 shows the HTML to embed an AMS video in a web page. This is the code shown in this article.

Listing 2:

<!DOCTYPE html>
<html lang="en">
<head>
    <title>Azure Media Services Demo</title>
    <link href="https://amp.azure.net/libs/amp/2.3.6/skins/amp-default/azuremediaplayer.min.css" rel="stylesheet">
    <script src="https://amp.azure.net/libs/amp/2.3.6/azuremediaplayer.min.js"></script>
</head>
<body>
    <h1>Video</h1>
    <video id="vid1" class="azuremediaplayer amp-default-skin" autoplay controls width="640" height="400" data-setup='{"nativeControlsForTouch": false}'>
        <source src="STREAMING_URL_MANIFEST"
                type="application/vnd.ms-sstr+xml" />
    </video>
</body>
</html>
  

where STREAMING_URL_MANIFEST is replaced with the Streaming URL you copied from the video asset.

To add captions to this video, add a <track> tag inside the <video> tag, as shown in Listing 3:

Listing 3

<!DOCTYPE html>
<html lang="en">
<head>
    <title>Azure Media Services Demo</title>
    <link href="https://amp.azure.net/libs/amp/2.3.6/skins/amp-default/azuremediaplayer.min.css" rel="stylesheet">
    <script src="https://amp.azure.net/libs/amp/2.3.6/azuremediaplayer.min.js"></script>
</head>
<body>
    <h1>Video</h1>
    <video id="vid1" class="azuremediaplayer amp-default-skin" autoplay controls width="640" height="400" data-setup='{"nativeControlsForTouch": false}'>
        <source src="STREAMING_URL_MANIFEST"
                type="application/vnd.ms-sstr+xml" />
        <track src="VTT_URL" label="english" kind="subtitles" srclang="en-us" default />
    </video>
</body>
</html>
  

where VTT_URL is replaced with a URL consisting of the same domain and folder as in the src attribute of the source tag, but with "transcript.vtt" as the file name.

Listing 4 shows an example using an Azure Media Services account that I have since deleted.

Listing 4:

<!DOCTYPE html>
<html lang="en">
<head>
    <title>Azure Media Services Demo</title>
    <link href="https://amp.azure.net/libs/amp/2.3.6/skins/amp-default/azuremediaplayer.min.css" rel="stylesheet">
    <script src="https://amp.azure.net/libs/amp/2.3.6/azuremediaplayer.min.js"></script>
</head>
<body>
    <h1>Video</h1>
    <video id="vid1" class="azuremediaplayer amp-default-skin" autoplay controls width="640" height="400" data-setup='{"nativeControlsForTouch": false}'>
        <source src="https://dgtestblogams-usea.streaming.media.azure.net/232493e2-8c99-41a0-bb09-5a0aea47de35/3b331fca-41e6-458c-8171-235ef3f76875.ism/manifest"
                type="application/vnd.ms-sstr+xml" />
        <track src="https://dgtestblogams-usea.streaming.media.azure.net/29a650b6-5c0a-4932-8efb-2b4bb4a81bf0/transcript.vtt" label="english" kind="subtitles" srclang="en-us" default />
    </video>
</body>
</html>
  

Add this HTML file to any web server and navigate to its URL using a web browser. You should see a page with your video embedded and with captions displaying at the bottom of the video, as shown in Fig. 14.

ams14-VideoWithCaptions
Fig. 14

In this article, I showed you how to include captions in an Azure Media Services video embedded in a web page.

Wednesday, February 3, 2021 9:07:00 AM (GMT Standard Time, UTC+00:00)
# Tuesday, February 2, 2021

In a previous article, I showed you how to use Azure Media Services (AMS) to analyze a video. Among other things, this analysis performs audio transcription, converting speech in your video to text. This outputs two files containing the text spoken in the audio track of your video.

You may want to only do audio transcription. If you are not interested in the other analysis output, it does not make sense to spend the time or compute on analyzing a video for the other features. AMS allows you to perform only Audio Transcription and eschew the other analysis.

Navigate to the Azure Portal and to your Azure Media Services account, as shown in Fig. 1.

ams01-OverviewBlade
Fig. 1

Then, select "Assets" from the left menu to open the "Assets" blade, as shown in Fig. 2.

ams02-AssetsBlade
Fig. 2

Select the Input Asset you uploaded to display the Asset details page, as shown in Fig. 3.

ams03-AssetDetails
Fig. 3

Click the [Add job] button (Fig. 4) to display the "Create a job" dialog, as shown in Fig. 5.

ams04-AddJobButton
Fig. 4

ams05-CreateJob
Fig. 5

At the "Transform" field, select the "Create new" radio button.

At the "Transform name" textbox, enter a name to help you identify this Transform.

At the "Description" field, you may optionally enter some text to describe what this transform will do.

At the "Transform type" field, select the "Audio transcription" radio button.

At the "Analysis type" field, select the "Video and audio" radio button.

The "Automatic language detection" section allows you to either specify the audio language or allow AMS to figure this out. If you know the language, select the "No" radio button and select the language from the dropdown list. If you are unsure of the language, select the "Yes" radio button to allow AMS to infer it.

The "Configure Output" section allows you to specify where the generated output assets will be stored.

At the "Output asset name" field, enter a descriptive name for the output asset. AMS will suggest a name, but I prefer the name of the Input Asset, followed by "_AudioTranscription" or something more descriptive.

At the "Asset storage account" dropdown, select the Azure Storage Account in which to save a container and the blob files associated with the output asset.

At the "Job name" field, enter a descriptive name for this job. A descriptive name is helpful if you have many jobs running and want to identify this one.

At the "Job priority" dropdown, select the priority in which this job should run. The options are "High", "Low", and "Normal". I generally leave this as "Normal" unless I have a reason to change it. A High priority job will run before a Normal priority job, which will run before a Low priority job.

Click the [Create] button to create the job and queue it to be run.
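You can also create a similar transform and job from the Azure CLI. The sketch below is only an approximation of the portal steps: the names are placeholders, the built-in AudioAnalyzer preset is roughly what the portal's "Audio transcription" option uses, and you should confirm the job parameters with az ams job start --help.

# Create a transform based on the built-in AudioAnalyzer preset
az ams transform create --resource-group MyResourceGroup --account-name mymediaservices --name AudioTranscriptionTransform --preset AudioAnalyzer

# Create an empty output asset, then queue a job that reads the input asset and writes to the output asset
az ams asset create --resource-group MyResourceGroup --account-name mymediaservices --name MyVideo_AudioTranscription
az ams job start --resource-group MyResourceGroup --account-name mymediaservices --transform-name AudioTranscriptionTransform --name MyTranscriptionJob --input-asset-name MyVideo --output-assets MyVideo_AudioTranscription=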

You can check the status of the job by selecting "Transforms + jobs" from the left menu to open the "Transforms + jobs" blade (Fig. 6) and expanding the job you just created (Fig. 7).

ams06-TransformsJobs
Fig. 6

ams07-ExpandJob
Fig. 7

The state column tells you whether the job is queued, running, or finished.

Click the name of the job to display details about the job, as shown in Fig. 8.

ams08-JobDetails
Fig. 8

After the job finishes, when you return to the "Assets" blade, you will see the new output Asset listed, as shown in Fig. 9.

ams09-AssetsBlade
Fig. 9

Click the name of the asset you just created to display the Asset Details blade, as shown in Fig. 10.

ams10-AudioTranscriptionAssetDetails
Fig. 10

Click the link to the left of "Storage container" to view the files in Blob storage, as shown in Fig. 11.

ams11-Container
Fig. 11

The speech-to-text output can be found in the files transcript.ttml and transcript.vtt. These two files contain the same information - words spoken in the video and times they were spoken - but they are in different standard formats.

Listing 1 shows a sample TTML file for a short video, while Listing 2 shows a VTT file for the same video.

Listing 1:

<?xml version="1.0" encoding="utf-8"?>
 <tt xml:lang="en-US" xmlns="http://www.w3.org/ns/ttml" xmlns:tts="http://www.w3.org/ns/ttml#styling" xmlns:ttm="http://www.w3.org/ns/ttml#metadata">
   <head>
     <metadata>
       <ttm:copyright>Copyright (c) 2013 Microsoft Corporation.  All rights reserved.</ttm:copyright>
     </metadata>
     <styling>
       <style xml:id="Style1" tts:fontFamily="proportionalSansSerif" tts:fontSize="0.8c" tts:textAlign="center" tts:color="white" />
     </styling>
     <layout>
       <region style="Style1" xml:id="CaptionArea" tts:origin="0c 12.6c" tts:extent="32c 2.4c" tts:backgroundColor="rgba(0,0,0,160)" tts:displayAlign="center" tts:padding="0.3c 0.5c" />
     </layout>
   </head>
   <body region="CaptionArea">
     <div>
       <!-- Confidence: 0.90088177 -->
       <p begin="00:00:00.000" end="00:00:07.080">This video is about Azure Media Services and Azure Media</p>

      <!-- Confidence: 0.90088177 -->
       <p begin="00:00:07.206" end="00:00:08.850">Services are.</p>

      <!-- Confidence: 0.935814 -->
       <p begin="00:00:08.850" end="00:00:11.029">Awesome.</p>
     </div>
   </body>
 </tt>
  

Listing 2:

WEBVTT

NOTE duration:"00:00:11.0290000"

NOTE language:en-us

NOTE Confidence: 0.90088177

00:00:00.000 --> 00:00:04.956 
This video is about Azure Media Services

NOTE Confidence: 0.90088177

00:00:04.956 --> 00:00:11.029 
and Azure Media Services are. Awesome.
  
Tuesday, February 2, 2021 9:24:00 AM (GMT Standard Time, UTC+00:00)
# Thursday, January 28, 2021

GCast 101:

Azure Resource Groups

What are the advantages of Azure Resource Groups? How do I create and manage a Resource Group?

Azure | GCast | Screencast | Video
Thursday, January 28, 2021 9:13:00 AM (GMT Standard Time, UTC+00:00)
# Friday, January 22, 2021

In a previous article, I showed you how to upload an asset to an Azure Media Services (AMS) account. In this article, you will learn how to use Azure Media Services to analyze a video.

Navigate to the Azure Portal and to your Azure Media Services account, as shown in Fig. 1.

ams01-OverviewBlade
Fig. 1

Then, select "Assets" from the left menu to open the "Assets" blade, as shown in Fig. 2.

ams02-AssetsBlade
Fig. 2

Select the Input Asset you uploaded to display the Asset details page, as shown in Fig. 3.

ams03-AssetDetails
Fig. 3

Click the [Add job] button (Fig. 4) to display the "Create a job" dialog, as shown in Fig. 5.

ams04-AddJobButton
Fig. 4

ams05-CreateJobBlade
Fig. 5

At the "Transform" field, select the "Create new" radio button.

At the "Transform name" textbox, enter a name to help you identify this Transform.

At the "Description" field, you may optionally enter some text to describe what this transform will do.

At the "Transform type" field, select the "Video and audio analyzer" radio button.

At the "Analysis type" field, select the "Video and audio" radio button.

The "Automatic language detection" section allows you to either specify the audio language or allow AMS to figure this out. If you know the language, select the "No" radio button and select the language from the dropdown list. If you are unsure of the language, select the "Yes" radio button to allow AMS to infer it.

The "Configure Output" section allows you to specify where the generated output assets will be stored.

At the "Output asset name" field, enter a descriptive name for the output asset. AMS will suggest a name, but I prefer the name of the Input Asset, followed by "_Analysis" or something more descriptive.

At the "Asset storage account" dropdown, select the Azure Storage Account in which to save a container and the blob files associated with the output asset.

At the "Job name" field, enter a descriptive name for this job. A descriptive name is helpful if you have many jobs running and want to identify this one.

At the "Job priority" dropdown, select the priority in which this job should run. The options are "High", "Low", and "Normal". I generally leave this as "Normal" unless I have a reason to change it. A High priority job will run before a Normal priority job, which will run before a Low priority job.

Click the [Create] button to create the job and queue it to be run.

You can check the status of the job by selecting "Transforms + jobs" from the left menu to open the "Transforms + jobs" blade (Fig. 6) and expanding the job you just created (Fig. 7).

ams06-TransformJobs
Fig. 6

ams07-ExpandJob
Fig. 7

The state column tells you whether the job is queued, running, or finished.

Click the name of the job to display details about the job, as shown in Fig. 8.

ams08-JobDetails
Fig. 8

After the job finishes, when you return to the "Assets" blade, you will see the new output Asset listed, as shown in Fig. 9.

ams09-AssetsBlade
Fig. 9

Click on the link in the "Storage link" column to view the files in Blob storage, as shown in Fig. 10.

ams10-Container
Fig. 10

AMS Analytics produces the following text files:

| File name | Contents |
|---|---|
| annotations.json | A set of tags identifying objects and actions at various points throughout the video |
| contentmoderation.json | Information at time points throughout the video, indicating whether the video contains racy and/or adult content and should be reviewed |
| emotions.json | An analysis of emotions displayed on the faces in the video |
| faces.json | Details of each face detected in the video at various time points |
| insights.json | Information on faces, OCR, and transcriptions at time points throughout the video |
| lid.json | Spoken languages detected at various time points throughout the video |
| metadata.json | Data about the video and audio tracks, such as format and size |
| ocr.json | The text of any words displayed on screen |
| rollingcredits.json | Information about rolling credits displayed, if any |
| transcript.ttml | A transcription of any spoken text in the video, in Timed Text Markup Language (TTML) format |
| transcript.vtt | A transcription of any spoken text in the video, in WebVTT format |

In addition, you will find thumbnail images taken from the video as JPG files or as a ZIP file containing multiple JPG files.
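If you would rather inspect these files locally, you can download the entire output container with the Azure CLI. A minimal sketch; the storage account name, key, and asset-... container name are placeholders (the container name appears in the asset's "Storage container" link).

# Download every blob in the output asset's container to the current folder
az storage blob download-batch --account-name mystorageaccount --account-key <storage-account-key> --source asset-xxxxxxxx --destination .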

In this article, you learned how to use Azure Media Services to analyze an audio/video file.

Friday, January 22, 2021 9:45:00 AM (GMT Standard Time, UTC+00:00)
# Wednesday, January 20, 2021

In a previous article, I showed you how to use Azure Media Services to generate a Streaming Locator so that others can view and/or download your video.

In this article, I will show you how to create a web page that allows users to select the format and resolution in which they want to view your video. 

Navigate to the Azure Portal and to your Azure Media Services account, as shown in Fig. 1

ams01-OverviewBlade
Fig. 1

Then, select "Assets" from the left menu to open the "Assets" blade, as shown in Fig. 2.

ams02-AssetsBlade
Fig. 2

Select the Output Asset created by encoding your input video Asset to display the Asset details page, as shown in Fig. 3.

ams03-AdaptiveStreamingAsset
Fig. 3

Verify that the Streaming Locator exists and is running. Start it, if necessary.

Click the "View locator" link to display the "Streaming URLs" dialog, as shown in Fig. 4.

ams04-StreamingUrlsBlade
Fig. 4

Scroll down to the "SmoothStreaming" section shown in Fig. 5.

ams05-SmoothStreaming
Fig. 5

The SmoothStreaming URL points to a file named "manifest", which is an XML document with information on available encoded videos in this asset. A sample of such a document is in Listing 1.

Listing 1:

<?xml version="1.0" encoding="UTF-8"?>
<SmoothStreamingMedia MajorVersion="2" MinorVersion="2" Duration="110720000" TimeScale="10000000">
    <StreamIndex Chunks="2" Type="audio" Url="QualityLevels({bitrate})/Fragments(aac_und_2_127999_2_1={start time})" QualityLevels="1" Language="und" Name="aac_und_2_127999_2_1">
        <QualityLevel AudioTag="255" Index="0" BitsPerSample="16" Bitrate="127999" FourCC="AACL" CodecPrivateData="1190" Channels="2" PacketSize="4" SamplingRate="48000" />
        <c t="0" d="60160000" />
        <c d="50560000" />
    </StreamIndex>
    <StreamIndex Chunks="2" Type="video" Url="QualityLevels({bitrate})/Fragments(video={start time})" QualityLevels="4">
        <QualityLevel Index="0" Bitrate="2478258" FourCC="H264" MaxWidth="1024" MaxHeight="576" CodecPrivateData="000000016764001FACD94040049B0110000003001000000303C0F18319600000000168EBECB22C" />
        <QualityLevel Index="1" Bitrate="1154277" FourCC="H264" MaxWidth="640" MaxHeight="360" CodecPrivateData="000000016764001EACD940A02FF970110000030001000003003C0F162D960000000168EBECB22C" />
        <QualityLevel Index="2" Bitrate="731219" FourCC="H264" MaxWidth="480" MaxHeight="270" CodecPrivateData="0000000167640015ACD941E08FEB0110000003001000000303C0F162D9600000000168EBECB22C" />
        <QualityLevel Index="3" Bitrate="387314" FourCC="H264" MaxWidth="320" MaxHeight="180" CodecPrivateData="000000016764000DACD941419F9F0110000003001000000303C0F14299600000000168EBECB22C" />
        <c t="0" d="60000000" />
        <c d="50333333" />
    </StreamIndex>
</SmoothStreamingMedia>
  

Notice there are two <StreamIndex> tags: one for the audio and one for the video. The StreamIndex audio tag has only one <QualityLevel> child tag, indicating that there is only one audio option. The StreamIndex video tag has four <QualityLevel> child tags, indicating that there are four video options, each with a different size and bitrate.

We can add the SmoothStreaming manifest URL to an HTML <video> tag, as shown in Listing 2.

Listing 2:

<video
    id="vid1"
    class="azuremediaplayer amp-default-skin"
    autoplay
    controls
    width="848"
    height="480"
    data-setup='{"nativeControlsForTouch": false}'>
    <source
        src="https://dgtestams-usea.streaming.media.azure.net/77ec142c-e655-41a2-8ddb-a3e46168751a/WIN_20201215_14_28_08_Pro.ism/manifest"
        type="application/vnd.ms-sstr+xml" />
</video>
  

A full web page is shown in Listing 3:

Listing 3:

<html>
    <head>
        <link href="https://amp.azure.net/libs/amp/latest/skins/amp-default/azuremediaplayer.min.css" rel="stylesheet" />
        <script src="https://amp.azure.net/libs/amp/latest/azuremediaplayer.min.js"></script>
    </head>
    <body>
        <video id="vid1" class="azuremediaplayer amp-default-skin" autoplay controls width="848" height="480" data-setup='{"nativeControlsForTouch": false}'>
            <source src="https://dgtestams-usea.streaming.media.azure.net/77ec142c-e655-41a2-8ddb-a3e46168751a/WIN_20201215_14_28_08_Pro.ism/manifest" type="application/vnd.ms-sstr+xml" />
        </video>
    </body>
</html>
  
  

Fig. 6 shows the output of Listing 3 when viewed in a browser.

ams06-VideoTagInBrowser
Fig. 6

As you can see, clicking the "Quality" icon at the bottom right of the player allows the viewer to select the quality of the video. This is helpful if the user is on a low-bandwidth connection.

Note that you are charged while the Streaming Endpoint is running, so it is important to stop the endpoint if you do not need it.

In this article, you learned how to use the SmoothStreaming URL to add your video to a web page.

Wednesday, January 20, 2021 8:03:00 AM (GMT Standard Time, UTC+00:00)
# Tuesday, January 19, 2021

In a previous article, I showed you how to use Azure Media Services to encode a video.

In this article, I will show you how to generate a URL, allowing others to view your encoded video online.

Navigate to the Azure Portal and to your Azure Media Services account, as shown in Fig. 1

ams01-OverviewBlade
Fig. 1

Then, select "Assets" from the left menu to open the "Assets" blade, as shown in Fig. 2.

ams02-AssetsBlade
Fig. 2

Select the Output Asset created by encoding your input video Asset to display the Asset details page, as shown in Fig. 3.

ams03-AdaptiveStreamingAsset
Fig. 3

Click the [New streaming locator] button (Fig. 4) to display the "Add streaming locator" dialog, as shown in Fig. 5.

ams04-NewStreamingLocatorButton
Fig. 4

ams05-AddStreamingLocatorBlade
Fig. 5

At the "Name" field, enter a descriptive name for this locator.

At the "Streaming policy" dropdown, select a desired Streaming Policy. A Streaming Policy define streaming protocols and encryption options. There are options to allow for streaming online or downloading and for adding encryption and Digital Rights Management. For this demo, I have selected "Predefined_DownloadAndClearStreaming". This allows users to view the video online and to download it; and it adds no encryption or DRM.

The flowchart in Fig. 6 is from the Microsoft Azure documentation and will help you decide which Streaming Policy is right for you.

ams06-StreamingPolicyFlowchart
Fig. 6

The Streaming Locator will not last forever: you must include an expiration date and time. By default, this is set to 100 years in the future. Change it if you want your video to be accessible for a shorter period of time.

Click the [Add] button to add the Streaming Locator.

The dialog will close and return you to the output Assets details page with the Streaming URL filled in, as shown in Fig. 7.

ams07-StartStreamingEndpoint
Fig. 7

The URL is still not accessible until you start the streaming endpoint. Click the [Start streaming endpoint] button and the [Start] button in the popup dialog to enable the endpoint URL, as shown in Fig. 8.

ams08-ConfirmStartStreamingEndpoint
Fig. 8

After doing this, the Streaming locator will show as "STREAMING" on the output Asset details page, as shown in Fig. 9. When the Streaming endpoint is started, you can preview the video from within the Asset details page.

ams09-ViewLocator
Fig. 9

Click the "View locator" link to display the "Streaming URLs" dialog. Select the "Streaming endpoint" from the dropdown, as shown in Fig. 10.

ams10-StreamingUrlsBlade
Fig. 10

Many URLs are displayed in this dialog. For the encoding I selected, the "Download" section contains a JPG URL and several MP4 URLs, as shown in Fig. 11.

ams11-Downloads
Fig. 11

You can paste the JPG URL into a browser to view a thumbnail image from the video. You can paste any of the MP4 URLs into a browser to view or download the full video. The MP4 URLs may differ by their resolution and encoding. Clues about which is which can be found in the name of each MP4. The JSON URLs provide metadata about the video and can be used in applications. For example,
https://dgtestams-usea.streaming.media.azure.net/77ec142c-e655-41a2-8ddb-a3e46168751a/WIN_20201215_14_28_08_Pro_1024x576_AACAudio_2478.mp4
contains a video that is 1024 pixels wide by 576 pixels high and was encoded with Advanced Audio Coding (AAC).

Try each of these in your browser to see the differences.
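The locator and endpoint can also be managed from the Azure CLI. This is a sketch under the assumption that the asset, account, and locator names below are placeholders for your own; the streaming policy name is the one selected above.

# Create a streaming locator for the encoded output asset
az ams streaming-locator create --resource-group MyResourceGroup --account-name mymediaservices --asset-name MyVideo_AdaptiveStreaming --name MyStreamingLocator --streaming-policy-name Predefined_DownloadAndClearStreaming

# Start the default streaming endpoint so the URLs become reachable
az ams streaming-endpoint start --resource-group MyResourceGroup --account-name mymediaservices --name default

# List the streaming and download paths for the locator
az ams streaming-locator get-paths --resource-group MyResourceGroup --account-name mymediaservices --name MyStreamingLocator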

Note that you are charged while the Streaming Endpoint is running, so it is important to stop the endpoint if you do not need it.

In this article, you learned how to create and start a Streaming Locator to make your Azure Media Services video available online.

Tuesday, January 19, 2021 9:46:00 AM (GMT Standard Time, UTC+00:00)
# Friday, January 15, 2021

In the last article, I showed you how to add Assets to an Azure Media Services (AMS) account. An Asset can point to an audio or video file, but you will want to encode that file to allow others to consume it. Encoding converts the audio or video into formats that a wide range of clients can consume, and there are many encoding options to choose from.

Those who are consuming your media are not all using the same systems. They may have different devices, different clients, different connection speeds, and different software installed. You will want to consider the capabilities and configurations of your users when you decide how to encode your media. Fortunately, Azure Media Services gives you many options.

We use an AMS Job to encode media. A job accepts an input Asset and produces an output Asset. That output Asset may consist of one or more files stored in a single Azure Storage Blob Container.

To begin encoding, navigate to the Azure Portal and open your Azure Media Services account, as shown in Fig. 1.

ams01-OverviewBlade
Fig. 1

Then, select "Assets" from the left menu to open the "Assets" blade, as shown in Fig. 2.

ams02-AssetsBlade
Fig. 2

See the following articles if you need help creating an AMS account or an AMS Asset.

Click the [Add job] button (Fig. 3) to display the "Create a job" dialog, as shown in Fig. 4.

ams03-AddJobButton
Fig. 3

ams04-CreateJobBlade
Fig. 4

The first thing you need to do is create or select a Transform. A Transform is a recipe for doing something, like encoding a video. It is used by a Job, which tells Azure to execute the steps in a Transform. I will assume this is your first time doing this and that you do not have any Transforms created, so you will need to create a new one; in the future, you may choose to re-use an existing Transform in a new Job.

At the "Transform" radio button, select "Create new".

At the "Transform name" textbox, enter a name to help you identify this Transform.

At the "Description" field, you may optionally enter some text to describe what this transform will do.

At the "Transform type" field, select the "Encoding" radio button.

At the "Built-in preset name" dropdown, you can select a desired encoding output appropriate for your audience. For this demo, select "Adaptive Streaming". This will create files in multiple formats that can be consumed by a variety of clients.

Next, we configure the settings for the output asset.

At the "Output asset name", enter a name to help you identify the output Asset that will be created. Azure will supply a default name, but I prefer to use something more readable, such as the Input Asset name, followed by the type of Transform.

At the "Asset storage account" dropdown, select the storage account in which to save the container and files associated with this asset.

At the "Job name" field, enter a name for this job to help you identify it later.

At the "Job priority" dropdown, select "Normal", "High", or "Low" priority, depending on whether you want this job to take precedence over other jobs. Unless I have a compelling reason, I leave this as the default "Normal".

Click the [Create] button to create and queue up the job.
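The equivalent transform and job can also be created from the Azure CLI. Treat this as a sketch: the names are placeholders, and you should verify the job syntax with az ams job start --help.

# Create a transform that uses the built-in Adaptive Streaming preset
az ams transform create --resource-group MyResourceGroup --account-name mymediaservices --name AdaptiveStreamingTransform --preset AdaptiveStreaming

# Create an empty output asset, then queue a job that encodes the input asset into it
az ams asset create --resource-group MyResourceGroup --account-name mymediaservices --name MyVideo_AdaptiveStreaming
az ams job start --resource-group MyResourceGroup --account-name mymediaservices --transform-name AdaptiveStreamingTransform --name MyEncodingJob --input-asset-name MyVideo --output-assets MyVideo_AdaptiveStreaming=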

You can check the progress of the job by selecting "Transforms + jobs" in the left menu to display the "Transforms + jobs" blade, as shown in Fig. 5.

ams05-TransformJobsBlade
Fig. 5

Find the row with your Transform name (this is why it is important to give it an easily identifiable name). Expand it to see the Jobs using this Transform, as shown in Fig. 6.

ams06-TransformAndJobsBlade
Fig. 6

The state column tells you whether the job is queued, running, or finished.

From the "Transform + jobs" blade, you can click the name of the Transform to display more details about the Transform, as shown in Fig. 7 or click the name of the Job to display details about the job, as shown in Fig. 8.

ams07-TransformDetailsBlade
Fig. 7

ams08-JobDetailsBlade
Fig. 8

After the job finishes, when you return to the "Assets" blade, you will see the new output Asset listed, as shown in Fig. 9.

ams09-AssetsBlade
Fig. 9

Click on the link in the "Storage link" column to view the files in Blob storage, as shown in Fig. 10.

ams10-Container
Fig. 10

Note that there are multiple MP4 files, each with a different resolution. The name of each file indicates the resolution of the video, allowing users with smaller screens or slower bandwidth to select the optimum resolution for viewing.

The container also contains a thumbnail image and several text files with information describing the videos that client players can use.

In this article, you learned how to use Azure Media Services to encode a video. In the next article, I will show you how to share that video with others.

Friday, January 15, 2021 9:39:00 AM (GMT Standard Time, UTC+00:00)
# Wednesday, January 13, 2021

In the last article, I introduced Azure Media Services and showed how to create an Azure Media Services (AMS) account.

In this article, I will show you how to add video and/or audio assets to an Azure Media Services account. This is often the first step in sharing media online.

An Asset points to an Azure Storage Blob Container containing one or more files. These files contain either media or metadata about media. We distinguish between Input Assets (assets provided to AMS via a user or other external source) and Output Assets (assets produced by AMS jobs). Fig. 1 illustrates this relationship.

ams01-AssetsContainer
Fig. 1

Let's look at how to upload a video file from your local computer as an Asset, as illustrated in Fig. 2.

ams02-PublishDiagram
Fig. 2

Open the Azure Portal and navigate to the Azure Media Services account, as shown in Fig. 3.

ams03-OverviewBlade
Fig. 3

Select "Assets" in the left menu to open the "Assets" blade, as shown in Fig. 4.

ams04-AssetsBlade
Fig. 4

Click the [Upload] button (Fig. 5) to open the "Upload new assets" dialog, as shown in Fig. 6.

ams05-UploadButton
Fig. 5

ams06-UploadNewAsset
Fig. 6

At the "Storage account" dropdown, select the storage account in which you want to store the media file.

Click the "Upload files" icon and select the video file or files you want to upload.

More fields display for each file selected, as shown in Fig. 7.

ams07-UploadNewAsset-Completed
Fig. 7

Enter a name for each asset; then, click the [I agree and upload] button to begin uploading your video.

When the upload is complete, the asset will be listed, as shown in Fig. 8.

ams08-AssetsBladeWithAsset
Fig. 8

Click the link in the "Storage link" column to view the Storage Blob container and files associated with this asset, as shown in Fig. 9.

ams09-Container
Fig. 9
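The upload can also be scripted with the Azure CLI. This is a sketch, with placeholder names for the resource group, Media Services account, storage account, and video file; the asset's container name is whatever the second command returns.

# Create an (empty) asset in the Media Services account
az ams asset create --resource-group MyResourceGroup --account-name mymediaservices --name MyVideo

# Look up the blob container that backs the asset
az ams asset show --resource-group MyResourceGroup --account-name mymediaservices --name MyVideo --query container --output tsv

# Upload the video file into that container (substitute the container name returned above)
az storage blob upload --account-name mystorageaccount --account-key <storage-account-key> --container-name asset-xxxxxxxx --name MyVideo.mp4 --file ./MyVideo.mp4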

In this article, you learned how to upload a video file to create an Azure Media Services asset. You will want to encode this asset so that others can view it; I will show how to encode in the next article.

Wednesday, January 13, 2021 9:01:00 AM (GMT Standard Time, UTC+00:00)
# Tuesday, January 12, 2021

Streaming video online is an effective way to communicate to large numbers of people.

But there are challenges. You need to get your video online in a format that is accessible to others and make it available to your audience.

You may also want to provide closed captioning for hearing impaired users; analyze the contents of your audio and video; reduce latency with a Content Delivery Network; and secure your media appropriately.

Azure Media Services provides all these capabilities and does so in a highly scalable, fault-tolerant way.

The first step in using Azure Media Services is to create an Azure Media Services Account. As with most services in Azure, you can create an Azure Media Services Account in the Azure Portal by clicking the [Create a resource] button (Fig. 1); then, search for and select "Media Services", as shown in Fig. 2.

ams01-CreateAResourceButton
Fig. 1

ams02-New
Fig. 2

The "Create media service account" dialog displays, as shown in Fig. 3.

ams03-CreateMediaServiceAccount
Fig. 3

At the "Subscription" dropdown, select the Subscription that will contain this Media Service Account. Most of you will have only one subscription.

At the "Resource group" field, select a Resource Group to contain this account or click the "Create new" link to create a new Resource Group to contain it. A Resource Group is a logical grouping of Azure resources, making it easier for you to manage them together.

At the "Media Services account name" field, enter a unique name for this account. This name must be between 3 and 24 characters in length and can contain only numbers and lowercase letters.

At the "Location" dropdown, select a location in which to store this service. When selecting a location, consider the location of your users and any potential legal issues.

At the "Storage Account" field, select an existing storage account from the dropdown or click the "Create a new storage account" link to create a new storage account. This storage account will hold all the assets for your service, including audio files, video files, and metadata files. Unless I have media files that already exist, I tend to prefer to keep all my Azure Media Services assets in their own storage account.

Click the [Review + create] button to display the summary page, as shown in Fig. 4.

ams04-Review
Fig. 4

Check the "I have all the rights to use the content/file" checkbox and click the [Create] button to begin creating you Azure Media Services Account.

When deployment completes, the confirmation shown in Fig. 5 displays.

ams05-DeploymentIsComplete
Fig. 5

Click the [Go to resource] button to navigate to the "Overview" blade of the Media Service account, as shown in Fig. 6.

ams06-OverviewBlade
Fig. 6

In this article, you learned the advantages of Azure Media Services and how to create an Azure Media Services account. In the next article, I will show you how to add media assets to this account.

Tuesday, January 12, 2021 9:57:00 AM (GMT Standard Time, UTC+00:00)
# Monday, January 11, 2021

Episode 643

Mike Benkovich on GitHub Actions and Visual Studio

Mike Benkovich describes and demonstrates GitHub Actions and the new features of Visual Studio that allow you to create an Action from within the IDE.

http://benkotips.com/
Monday, January 11, 2021 9:46:00 AM (GMT Standard Time, UTC+00:00)
# Thursday, January 7, 2021

GCast 98:

Using the Azure Storage Explorer

The Azure Storage Explorer provides a simple way to access objects in an Azure Storage Account. This video walks you through how to install and use this tool.

Thursday, January 7, 2021 9:03:00 AM (GMT Standard Time, UTC+00:00)
# Tuesday, January 5, 2021

An Azure Resource Group (RG) is a logical grouping of resources or assets within an Azure subscription. This helps you organize related resources: you can open an RG and see, for example, a web app, its associated App Service Plan, and the database that it accesses listed together, reminding you that these things are related.

But there are more tangible benefits to Resource Groups.

For example, I create a lot of Azure demos for presentations that I deliver in-person, online, or as part of my GCast show. https://aka.ms/gcast

When I create a demo, I place all assets in the same resource group, which makes it easier to delete all these demo resources when the presentation ends.

Another advantage is the ability to create an ARM (Azure Resource Manager) template for all resources in a Resource Group with a few mouse clicks. This allows you to easily automate the deployment of these resources to a new environment using PowerShell or the Azure CLI. With an ARM template, resources are created in the correct order, and input parameters allow you to change things like the names and locations of these resources.

Azure also gives you the ability to move everything in a Resource Group from one subscription to another.

Finally, Azure allows you to merge two resource groups.

You can create a new Azure Resource Group in the Azure Portal (either by itself or as part of a resource that will be added to the group); via a REST API; via the Azure CLI; or using Azure PowerShell.
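For example, here is a minimal Azure CLI sketch of that lifecycle; the group name and location are placeholders.

# Create a resource group
az group create --name MyDemoResourceGroup --location westus2

# Export an ARM template describing everything currently in the group
az group export --name MyDemoResourceGroup > MyDemoResourceGroup.json

# Delete the group and everything in it when the demo is over
az group delete --name MyDemoResourceGroup --yes --no-wait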

When deciding how to organize your Azure assets, consider keeping together related resources by placing them in the same Resource Group. Also, consider creating a new Resource Group for each of your deployment environments, such as Development, Testing, and Production.

Tuesday, January 5, 2021 9:47:00 AM (GMT Standard Time, UTC+00:00)
# Monday, December 14, 2020

Episode 639

Kyle Bunting and Joel Hulen on Data Engineering in Azure

Kyle Bunting and Joel Hulen of Solliance describe Data Engineering and some of the tools, such as Azure Synapse Analytics, that allow you to perform Data Engineering at scale in the cloud.

Links:

https://www.solliance.net/

Monday, December 14, 2020 10:15:00 AM (GMT Standard Time, UTC+00:00)
# Tuesday, November 24, 2020

What is MinIO?


MinIO is an object storage system, similar to Amazon S3 or Azure Blob storage. It is built on top of Docker containers, which makes it easy to scale.

In a previous article, I showed you how to create and use a MinIO Server.

In this article, I will show how to create and use a MinIO Gateway for Azure Blob Storage.

MinIO Gateway

A MinIO Server stores files and objects itself. By contrast, a MinIO Gateway points to some other storage repository where the files are stored, while still allowing you to interact with those files as if they were stored in MinIO.

Prerequisites

Before you begin, you will need to install Docker Desktop, which you can download for either Windows or Mac.

You will also need an Azure Storage Account. This article explains how to create an Azure Storage Account.

Azure Blob Storage

You will need two pieces of information from your Azure Storage Account: the name of the storage account and the access key.

In the Azure Portal (https://portal.azure.com), you can find the storage account name at the top of the Resource page, as shown in Fig. 1.

mga01-StorageAccountName
Fig. 1

You can find the key on the "Access Keys" blade, as shown in Fig. 2.

mga02-StorageAccountKeys
Fig. 2

Note that there are two keys. Either one will work. Click the [Show Keys] button to view the keys and allow copying to your clipboard.

Creating a MinIO Gateway

A MinIO Gateway for Azure is created with the following command:

docker run -p 9000:9000 --name azure-s3 -e "MINIO_ACCESS_KEY=azurestorageaccountname" -e "MINIO_SECRET_KEY=azurestorageaccountkey" minio/minio gateway azure

where

azurestorageaccountname is the name of the Azure storage account and azurestorageaccountkey is an account key from that same storage account.

You can now log into the MinIO Gateway by opening a browser and navigating to http://127.0.0.1:9000/.

When prompted for your login credentials (Fig. 3), enter the storage account name in the "Access key" field and enter the storage account key in the "Secret Key" field.

mga03-Login
Fig. 3

After a successful login, the MinIO Gateway user interface will display, as shown in Fig. 4.

mga04-MinIO

Fig. 4

Note that this looks exactly like the MinIO Server user interface, described in this article.

In fact, you can create buckets and manage files in a MinIO Gateway exactly as you would in a MinIO server. The only difference is that the objects you manipulate are stored in the corresponding Azure Blob storage, rather than in MinIO. Each bucket is mapped to a Blob Storage container and each file is mapped to a blob.
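For example, you can point the MinIO client (mc) at the gateway and work with containers and blobs as S3-style buckets and objects. This is a sketch that assumes the client is installed; newer releases use mc alias set, while older ones use mc config host add.

# Register the gateway as an alias, using the storage account name and key as the credentials
mc alias set azuregw http://127.0.0.1:9000 azurestorageaccountname azurestorageaccountkey

# List buckets - each one maps to a blob container in the storage account
mc ls azuregw

# Create a bucket and copy a local file into it; they appear in Azure as a container and a blob
mc mb azuregw/demo-bucket
mc cp ./myfile.txt azuregw/demo-bucket/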

Conclusion

In this article, you learned how to create a MinIO Gateway for Azure.

Tuesday, November 24, 2020 9:31:00 AM (GMT Standard Time, UTC+00:00)
# Monday, November 23, 2020

Episode 636

Omkar Naik on Microsoft Cloud for Health Care

Microsoft Cloud Solution Architect Omkar Naik describes what Microsoft is doing for health care solutions with Azure, Dynamics, Office 365, and other tools and services.

Links:
http://aka.ms/smarterhealth
http://aka.ms/microsoftcloudforhealthcare
http://aka.ms/azure

Monday, November 23, 2020 9:15:00 AM (GMT Standard Time, UTC+00:00)
# Monday, November 16, 2020

Episode 635

Rik Hepworth on Azure Governance

Many of the issues around cloud computing have nothing to do with writing code. Asking questions early about expected costs, geographic issues, and technologies to choose can save headaches later.

Rik Hepworth describes this governance - the rules by which we operate the cloud - and how we can better prepare to develop for the cloud.

Links:

http://aka.ms/governancedocs
https://github.com/Azure/azure-policy

Monday, November 16, 2020 10:18:00 AM (GMT Standard Time, UTC+00:00)
# Monday, October 26, 2020

Episode 632

Magnus Martensson on the Cloud Adoption Framework

Magnus Martensson describes the Cloud Adoption Framework - a collective set of guidance from Microsoft - and how you can use it to migrate or create applications in the cloud.

https://docs.microsoft.com/en-us/azure/cloud-adoption-framework

Monday, October 26, 2020 8:13:00 AM (GMT Standard Time, UTC+00:00)
# Thursday, September 24, 2020

GCast 95:

Creating a MinIO Agent for Azure Blob Storage

Learn how to use MinIO to manage blobs in an Azure Storage Account

Thursday, September 24, 2020 12:25:40 PM (GMT Daylight Time, UTC+01:00)
# Monday, September 7, 2020

Episode 625

Peter de Tender on Azure Certification

Azure trainer Peter de Tender talks about what it takes to achieve Azure certification.

Links:

https://microsoft.com/learn
https://www.007ffflearning.com
https://twitter.com/pdtit

Monday, September 7, 2020 1:04:09 PM (GMT Daylight Time, UTC+01:00)
# Monday, August 10, 2020

Episode 621

Donovan Brown on App Innovations

App Innovation is a concept in which new and existing applications are designed to take advantage of what the cloud offers. Donovan Brown talks about some of these advantages and the decisions around this strategy.

Links:

https://www.donovanbrown.com/

Monday, August 10, 2020 8:04:00 AM (GMT Daylight Time, UTC+01:00)
# Thursday, June 4, 2020

GCast 87:

Logging to Azure Application Insights from a Java Spring Boot Application

With a few configuration settings, you can push your logs from a Java Spring Boot application into Azure Application Insights - even if the app is not running in Azure!

Azure | GCast | Java | Screencast | Video
Thursday, June 4, 2020 3:54:21 PM (GMT Daylight Time, UTC+01:00)
# Monday, March 16, 2020

Episode 602

Jaidev Kunjur on Azure Integration Tools

Jaidev Kunjur of Enkay Technology Solutions discusses some of the integration tools available in Microsoft Azure, such as Logic Apps, API Management, Azure Functions, and Event Grid.

He describes the capabilities of these tools and how his company is using them to solve integration problems for their customers.

https://enkaytech.com/
Monday, March 16, 2020 9:44:52 AM (GMT Standard Time, UTC+00:00)
# Thursday, March 12, 2020

GCast 77:

Connecting Azure Synapse to External Data

Azure Data Warehouse has been re-branded as Azure Synapse. Learn how to add data from an external system to an Azure Synapse database.

Thursday, March 12, 2020 10:07:09 AM (GMT Standard Time, UTC+00:00)
# Thursday, February 27, 2020

GCast 75:

Creating an Azure SQL Server Logical Server

How to create a logical SQL Server in Microsoft Azure.

Thursday, February 27, 2020 9:19:00 AM (GMT Standard Time, UTC+00:00)
# Thursday, February 20, 2020

GCast 74:

Continuous Deployment with Azure DevOps

Implement continuous integration and continuous deployment by automatically triggering build and deploy pipelines when code is committed to a repository branch.

ALM | Azure | DevOps | GCast | Screencast | Video
Thursday, February 20, 2020 8:17:00 AM (GMT Standard Time, UTC+00:00)
# Tuesday, February 18, 2020

Azure Logic Apps allow you to create scalable workflows hosted in the cloud. Although each Logic App is self-contained, it is often helpful to share artifacts such as maps, schemas, and certificates among multiple Logic Apps.

An Azure Integration Account provides a container for storing these artifacts.

Creating an Integration Account

To create an Integration Account, navigate to the Azure Portal [https://portal.azure.com], sign in and click the [Create a resource] button (Fig. 1).

IA01-CreateResourceButton
Fig. 1

Search for and select "Integration Account", as shown in Fig. 2.

IA02-FindIntegrationAccount
Fig. 2

The "Integration Account" information page displays, as shown in Fig. 3.

IA03-IntegrationAccountIntro
Fig. 3

Click the [Create] button to display the "Create Integration Account" blade with the "Basics" tab selected, as shown in Fig. 4.

IA04-IntegrationAccountBasics
Fig. 4

At the "Resource group" field, select or create a resource group in which to store this Integration Account.

At the "Integration account name" field, enter a unique name for this Integration Account.

At the "Location" dropdown, select the region in which to store this account. This should be the same region in which your Logic Apps are located.

At the "Pricing Tier" dropdown, select which pricing tier you wish to use. The options are (in increasing order of cost) "Free", "Basic", and "Standard". Only one Free Account is allowed per Azure Subscription. You can change this setting after creating an Integration Account.

Click the [Review + create] button when you have completed this tab. The "Review + create" tab displays, as shown in Fig. 5.

IA05-ReviewCreate
Fig. 5

Review your choices. Switch back to the "Basics" tab if you need to make any corrections. Click the [Create] button to create the Integration Account.

After Azure creates the Integration Account, the "Your deployment is complete" message displays, as shown in Fig. 6.

IA06-YourDeploymentIsComplete
Fig. 6

Click the [Go to resource] button to open the Integration Account, as shown in Fig. 7.

IA07-IntegrationAccountOverview
Fig. 7

Here you can manage reusable Schemas, Maps, and other Components.

Associating a Logic App with an Integration Account

An Integration Account can be associated with one or more Logic Apps, making components in the Integration Account available to each of these Logic Apps.

See this article to learn how to create a Logic App.

To associate a Logic App with an Integration Account, open the Logic App. The left menu of a Logic App is shown in Fig. 8.

IA08-LogicAppsLeftMenu
Fig. 8

Click the [Workflow settings] button (Fig. 9) under the "Settings" section of the left menu to open the "Access control configuration" blade, as shown in Fig. 10.

IA09-WorkflowSettingsButton
Fig. 9

IA10-WorkflowSettings
Fig. 10

At the "Integration account" dropdown, select the Integration Account you wish to associate with this Logic App, as shown in Fig. 11.

IA11-SelectIntegrationAccount
Fig. 11

Click the [Save] button (Fig. 12) at the top of the blade to save this configuration.

IA12-SaveButton
Fig. 12

You are now ready to use the artifacts in the Integration Account with this Logic App.

Tuesday, February 18, 2020 9:26:00 AM (GMT Standard Time, UTC+00:00)
# Friday, February 14, 2020

With a Logic App, you can create and run scalable workflows that are hosted in the Azure cloud. A graphical designer and connectors to hundreds of databases, APIs, and external applications and services make it possible to quickly create a workflow using Logic Apps.

To create a new Logic App, navigate to the Azure Portal, sign in and click the [Create a resource] button (Fig. 1).

LA01-CreateAResource
Fig. 1

Select Integration | Logic App, as shown in Fig. 2.

LA01-NewIntegrationLogicApps
Fig. 2

The "Create Logic App" blade displays with the "Basics" tab selected, as shown in Fig. 3.

LA03-NewLogicAppBlade
Fig. 3

At the "Resource group" field, select or create a resource group in which to store this Logic App.

At the "Logic App name" field, enter a unique name for this Logic App.

At the "Location" dropdown, select the region in which to store this Logic App.

Click the [Review + create] button when you have completed this tab. The "Review + create" tab displays, as shown in Fig. 4.

LA04-ReviewCreateTab
Fig. 4

Review your choices. Switch back to the "Basics" tab if you need to make any corrections. Click the [Create] button to create the Logic App.

After Azure creates the Logic App, the "Your deployment is complete" message displays, as shown in Fig. 5.

LA04-ReviewCreateTab
Fig. 5

Click the [Go to resource] button to open the Start Page of your Logic App, as shown in Fig. 6.

LA06-LogicAppStart
Fig. 6

On the start page, a video is available if you want a brief introduction to Logic Apps.

Below the video is a set of buttons that allow you to create a new workflow with a common trigger. A trigger is the event that starts a Logic App workflow.

The next section lists buttons for templates to perform some common tasks. Each template contains a trigger and one or more actions.

These buttons help to accelerate your development by providing some of the activities in a workflow and allowing you to fill in the specific properties.

Fig. 7 shows the Logic App designer after you select the HTTP Request-Response template.

LA07-LogicAppsDesigner
Fig. 7
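
If you selected the HTTP Request-Response template, saving the Logic App generates an "HTTP POST URL" on the Request trigger, which you can call from any HTTP client to test the workflow. Below is a minimal, hypothetical C# test call; the callback URL is a placeholder you would copy from the designer, and the class and payload are my own illustration rather than anything generated by Azure.

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class LogicAppTriggerTest
{
    static async Task Main()
    {
        // Placeholder: paste the "HTTP POST URL" shown on the Request trigger
        // in the Logic App designer after you save the workflow.
        const string callbackUrl = "<HTTP POST URL from the Request trigger>";

        using (var client = new HttpClient())
        {
            // Send a small JSON payload to fire the trigger.
            var payload = new StringContent("{ \"name\": \"test\" }", Encoding.UTF8, "application/json");
            HttpResponseMessage response = await client.PostAsync(callbackUrl, payload);

            // The Response action in the template returns a status code (and, optionally, a body).
            Console.WriteLine(response.StatusCode);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}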

In this article, you learned how to create a new Logic App.

Friday, February 14, 2020 9:20:00 AM (GMT Standard Time, UTC+00:00)
# Thursday, February 13, 2020

GCast 73:

Build an Azure DevOps Release Pipeline

How to create a pipeline that will automatically deploy an ASP.NET Core Web Application to an Azure App Service.

ALM | Azure | DevOps | GCast | Screencast | Video
Thursday, February 13, 2020 8:14:00 AM (GMT Standard Time, UTC+00:00)
# Wednesday, February 12, 2020

Microsoft Power Automate (formerly Microsoft Flow) and Azure Logic Apps solve similar problems. You can use either one to create a workflow hosted in the cloud. However, Logic Apps tend to be more powerful. If you create a Power Automate flow, you may eventually run into its limitations and wish to recreate the same workflow in Logic Apps.

Fortunately, this can be done in just a few steps.

Navigate to https://flow.microsoft.com, sign in, and select the flow you wish to migrate under "My Flows". The details of the flow display, as shown in Fig. 1.

PALA01-FlowDetails
Fig. 1

From the top toolbar, select Export | Logic Apps template (.json), as shown in Fig. 2.

PALA02-ExportButton
Fig. 2

A JSON file will be created and automatically downloaded. Fig. 3 shows a section of this file.

PALA03-JSON
Fig. 3

Navigate to the Azure Portal (https://portal.azure.com) and log in. Click the [Create a Resource] button (Fig. 4) and search for "Template deployment (deploy using custom templates)", as shown in Fig. 5.

PALA04-CreateResourceButton
Fig. 4

PALA05-NewTemplateDeploy
Fig. 5

Select "Template deployment" to display the info page for this service shown in Fig. 6.

PALA06-TemplateDeployInfo
Fig. 6

Click the [Create] button to display the "Custom deployment" start page, as shown in Fig. 7.

PALA07-TemplateDeployStart
Fig. 7

Click "Build your own template in the editor" to display the template editor, as shown in Fig. 8.

PALA08-TemplateEditor
Fig. 8

Click the [Load file] button (Fig. 9); then navigate to and select the JSON file you exported above.

PALA09-LoadFileButton
Fig. 9

The exported JSON displays, as shown in Fig. 10.

PALA10-LoadedFile
Fig. 10

Click the [Save] button to open the "Custom deployment" dialog with settings from the JSON file, as shown in Fig. 11.

PALA11-CustomDeployment
Fig. 11

Fill in the desired Resource Group and Logic App name, check the "I agree" checkbox, and click the [Purchase] button to create the Logic App.

After a few seconds, your Logic App will be available. You can then open it and change any settings you wish, as shown in Fig. 12.

PALA12-LogicApp
Fig. 12

It may be necessary to authenticate against any API connectors, so check these before testing your Logic App.

In this article, you learned how to export a Power Automate flow and import it into an Azure Logic App.

Wednesday, February 12, 2020 9:10:00 AM (GMT Standard Time, UTC+00:00)
# Tuesday, February 11, 2020

Azure Logic Apps and Microsoft Power Automate (formerly Microsoft Flow) are tools from Microsoft that allow users to build custom workflows.

Each of these tools provides a robust workflow engine with a graphical front-end. Power Automate (PA) is built on top of Logic Apps, and it is possible to export a Power Automate flow and import it into a Logic App. Each provides a graphical interface to add connectors, workflow steps, and control logic. Each supports an in-browser user interface, so you don't need to install anything locally (although a Visual Studio extension lets you design workflows from within Visual Studio). Each ships with a set of connectors to common databases, queues, APIs, and other systems, along with generic connectors to do things like calling a web service. Neither provides a great DevOps story: easy integration with version control, automated testing, and automated deployment is lacking in both.

But there are differences. A primary difference is the way Microsoft positions these two technologies: Microsoft targets PA at "Citizen Developers" - users with a strong knowledge of their systems and their business requirements, but without the knowledge or desire to write code. Logic Apps are targeted at developers and IT workers. As these products mature, expect PA to get more features around ease of use, while Logic Apps get more focus on increased power.

Here are some other differences:

Logic Apps:

  • are hosted in Azure
  • are more scalable
  • have a code view, making it slightly easier to use source control
  • have more connectors (e.g., Liquid templates, SAP, IoT)
  • support B2B and B2C scenarios
  • have faster-firing triggers
  • have better monitoring

Power Automate flows:

  • are hosted in Office 365
  • include a "Button" trigger for easy integration with PowerApps
  • provide some simple, common templates to get you started
  • have better SharePoint integration

When deciding between these tools, here are some questions to ask yourself:

  • Are you primarily using Azure or Office 365? Logic Apps run in Azure; PA runs in Office 365. If you are not currently using the appropriate platform, you will need to start doing so.
  • What is the tech level of those who will be maintaining your workflows? Logic Apps are designed for tech professionals; PA is designed for business users with some tech knowledge.
  • What are the scalability and performance requirements? PA can handle a lot, but the maximums are greater for Logic Apps.
  • Do many of your workflows read from and write to SharePoint? PA will probably make these easier to write.
  • With which external systems, databases, and APIs will your workflows interact? Logic Apps include access to many more connectors. Verify that the ones you need exist in the platform you choose.

One option is to begin writing your workflows with PA and, if you find that you need something more robust, use the import/export functionality to migrate your flows to Logic Apps and begin using that tooling.

Tuesday, February 11, 2020 8:49:00 AM (GMT Standard Time, UTC+00:00)
# Thursday, January 30, 2020

GCast 71:

Integrating Visual Studio Solution with Azure DevOps Repo

Learn how to configure your Visual Studio 2019 solution to integrate with an Azure DevOps repository.

ALM | Azure | DevOps | GCast | Screencast | Video | Visual Studio
Thursday, January 30, 2020 9:27:00 AM (GMT Standard Time, UTC+00:00)
# Monday, January 27, 2020

Episode 595

Tibi Covaci on Migrating to the Cloud

Tibi Covaci discusses strategies and factors companies need to consider when migrating their applications to the cloud.

Monday, January 27, 2020 8:02:00 AM (GMT Standard Time, UTC+00:00)
# Thursday, December 5, 2019

GCast 66:

Creating a Repo in Azure DevOps

How to create an Azure DevOps project and a code repo within that project.

Azure | DevOps | GCast | Screencast | Video
Thursday, December 5, 2019 9:10:00 AM (GMT Standard Time, UTC+00:00)
# Monday, September 30, 2019

Episode 578

Raj Krishnan on Azure Data Explorer

Raj Krishnan describes Azure Data Explorer - a highly-scalable, very fast in-memory data store formerly known as Kusto.

Monday, September 30, 2019 9:29:00 AM (GMT Daylight Time, UTC+01:00)
# Thursday, August 29, 2019

GCast 63:

Sentiment Analysis JavaScript Demo

In this video, I walk you through a JavaScript application that calls the Sentiment Analysis Cognitive Service.

Thursday, August 29, 2019 1:09:57 PM (GMT Daylight Time, UTC+01:00)
# Friday, August 23, 2019

GCast 62:

Sentiment Analysis Cognitive Service

This video explains the Sentiment Analysis service, which is part of the Text Analytics Cognitive Service.

Friday, August 23, 2019 4:47:19 AM (GMT Daylight Time, UTC+01:00)
# Wednesday, August 21, 2019

A data warehouse ("DW") is an ideal tool for collecting and associating disparate data.

A data warehouse has been a part of Microsoft SQL Server for decades, so it's not surprising that it is also included in Microsoft Azure.

To create a SQL Data Warehouse in Azure, navigate to the Azure Portal, sign in, and click the [Create a resource] button (Fig. 1).

dw01-CreateANewResourceButton
Fig. 1

From the menu, select Databases | SQL Data Warehouse, as shown in Fig. 2.

dw02-SqlDataWarehouse
Fig. 2

The "SQL Data Warehouse" dialog displays, allowing you to enter information about your new data warehouse, as shown in Fig. 3.

dw03-NewSqlDataWarehouse
Fig. 3

At the "Subscription" field, select the subscription in which you wish to create this data warehouse. Most of you will have only one subscription.

At the "Resource group" field, select an existing resource group or click the "Create new" link to create a new resource group in which to add this data warehouse. A resource group is an organizational unit to keep together related Azure resources.

At the "Data warehouse name" field, enter a unique name for your warehouse.

The "Server" field lists all SQL servers in the selected subscription. Every data warehouse is stored in one SQL Server. Select the SQL Server for this DW or click the "Create new" link to create a new SQL Server.

Clicking "Create new" displays the "New server" blade, as shown in Fig. 4. In this blade, you can enter the server name, location, and admin login credentials for a new server.

dw04-NewServer
Fig. 4

Click the [Review + create] button to display the "Review + create" tab of the "SQL Data Warehouse" dialog, as shown in Fig. 5.

dw05-ReviewCreate
Fig. 5

Click the [Create] button to create a new SQL Data Warehouse. This process may take a few minutes (longer if you also chose to create a new server).

After the Data Warehouse creation is complete, you can navigate to its management page. The "Overview" blade is shown in Fig. 6.

dw06-Overview
Fig. 6
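
Once deployment completes, you can connect to the data warehouse just as you would to any other SQL Server database, using the server name, database name, and admin credentials you chose above. Below is a minimal, hypothetical C# sketch using ADO.NET; the server, database, and credential values are placeholders, and you may first need to add a firewall rule on the server for your client's IP address.

using System;
using System.Data.SqlClient;

class DataWarehouseConnectionDemo
{
    static void Main()
    {
        // Placeholder values: substitute the server name, data warehouse name,
        // and admin credentials you entered when creating these resources.
        var connectionString =
            "Server=tcp:<yourserver>.database.windows.net,1433;" +
            "Database=<yourdatawarehouse>;" +
            "User ID=<adminlogin>;Password=<password>;Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // A trivial query to confirm the connection works.
            using (var command = new SqlCommand("SELECT @@VERSION;", connection))
            {
                Console.WriteLine(command.ExecuteScalar());
            }
        }
    }
}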

In this article, you learned how to create a new Azure SQL Data Warehouse.

Wednesday, August 21, 2019 3:00:00 PM (GMT Daylight Time, UTC+01:00)
# Friday, August 16, 2019

In the last article, I walked through the syntax of calling the Bing Spell Check service.

In this article, I will walk through a simple JavaScript application that calls this service.

If you want to follow along, this sample is part of my Cognitive Services demos, which you can find on GitHub at https://github.com/DavidGiard/CognitiveSvcsDemos

This project is found in the "SpellCheckDemo" folder.

Here is the main web page:

Listing 1:

<html>
<head>
    <title>Spell Check Demo</title>
    <script src="scripts/jquery-1.10.2.min.js"></script>
    <script src="scripts/script.js"></script>
    <script src="scripts/getkey.js"></script>
    <link rel="stylesheet" href="css/site.css">
</head>
 <body>
     <h1>Spell Check Demo</h1>
     <div>
         <textarea id="TextToCheck">Life ig buuutiful all the tyme
         </textarea>
     </div>
    <button id="SpellCheckButton">Check Spelling!</button>
     <div id="NewTextDiv"></div>
     <div id="OutputDiv"></div>

</body>
</html>
  

As you can see, the page consists of a text area with some misspelled text, a button, and two empty divs.

The page looks like this when rendered in a browser:

scjs01-PageOnLoad
Fig. 1

When the user clicks the button, we want to call the Spell Check service, sending it the text in the text area.

We want to display the values in the web service response in the OutputDiv div; and we want to display some of the raw information in the response in the NewTextDiv div.

Below is the screen after clicking the [Check Spelling] button.

scjs02-PageAfterClick

Fig. 2

We need a reference to the outputDiv, so we can easily write to it.

Listing 2:

var outputDiv = document.getElementById("OutputDiv");
  

Next, we bind code to the button's click event, as shown in Listing 3.

Listing 3:

var spellCheckButton = document.getElementById("SpellCheckButton"); 
spellCheckButton.onclick = function () { 
    // Replace this with your Spell Check API key from Azure 
    var subscriptionKey = "xxxxxxxxxxxxxxxxxxxxxxxx"; 

    outputDiv.innerHTML = "Thinking...";

    var textToCheck = document.getElementById("TextToCheck").textContent; 
    var webSvcUrl = "https://api.cognitive.microsoft.com/bing/v7.0/spellcheck/?text=" + textToCheck; 
    webSvcUrl = webSvcUrl + "&mode=proof&mkt=en-US";

    var httpReq = new XMLHttpRequest(); 
    httpReq.open("GET", webSvcUrl, true); 
    httpReq.setRequestHeader("Ocp-Apim-Subscription-Key", subscriptionKey) 
    httpReq.setRequestHeader("contentType", "application/json") 
    httpReq.onload = onSpellCheckSuccess; 
    httpReq.onerror = onSpellCheckError; 
    httpReq.send(null); 
};
  

This code gets the text from the text area and makes an asynchronous HTTP GET request to the Spell Check API, passing the API key in the header. When the API sends a response, this will call the onSpellCheckSuccess or onSpellCheckError function, depending on the success of the call.

Listing 4 shows the onSpellCheckSuccess function:

Listing 4:

function onSpellCheckSuccess(evt) { 
    var req = evt.srcElement; 
    var resp = req.response; 
    var data = JSON.parse(resp);

    var flaggedTokens = data.flaggedTokens; 
    if (data.flaggedTokens.length > 0) { 
        var newText = document.getElementById("TextToCheck").textContent; 
        ; 
        var outputHtml = ""; 
         flaggedTokens.forEach(flaggedToken => { 
            var token = flaggedToken.token; 
            var tokenType = flaggedToken.type; 
            var offset = flaggedToken.offset; 
            var suggestions = flaggedToken.suggestions; 
            outputHtml += "<div>" 
            outputHtml += "<h3>Token: " + token + "</h3>"; 
            outputHtml += "Type: " + tokenType + "<br/>"; 
            outputHtml += "Offset: " + offset + "<br/>"; 
             outputHtml += "<div>Suggestions</div>"; 
            outputHtml += "<ul>";

            if (suggestions.length > 0) { 
                 suggestions.forEach(suggestion => { 
                     outputHtml += "<li>" + suggestion.suggestion; 
                     outputHtml += " (" + (suggestion.score * 100).toFixed(2) + "%)" 
                }); 
                outputHtml += "</ul>"; 
                outputHtml += "</div>";

                newText = replaceTokenWithSuggestion(newText, token, offset, suggestions[0].suggestion) 
            } 
            else { 
                 outputHtml += "<ul><li>No suggestions for this token</ul>"; 
            } 
        });

        newText = "<h2>New Text:</h2>" + newText; 
        var newTextDiv = document.getElementById("NewTextDiv"); 
        newTextDiv.innerHTML = newText;

        outputHtml = "<h2>Details</h2>" + outputHtml; 
        outputDiv.innerHTML = outputHtml;

    } 
    else { 
        outputDiv.innerHTML = "No errors found."; 
    } 
};
  

As you can see, we parse out the JSON object from the response and retrieve each flaggedToken from that object. For each flaggedToken, we output information, such as the original text (or token), the tokenType, and suggested replacements, along with the score of each replacement.

If an error occurs when calling the API service, the onSpellCheckError function is called, as shown in Listing 5.

Listing 5:

function onSpellCheckError(evt) { 
    outputDiv.innerHTML = "An error has occurred!!!"; 
};
  

Finally, we replace each token with the first suggestion, using the code in Listing 6.

Listing 6*:

function replaceTokenWithSuggestion(originalString, oldToken, offset, newWord) { 
    var textBeforeToken = originalString.substring(0, offset);

    var textAfterToken = ""; 
    if (originalString.length > textBeforeToken.length + oldToken.length) { 
        textAfterToken = originalString.substring(offset + oldToken.length, originalString.length); 
    }

    var newString = textBeforeToken + newWord + textAfterToken;

    return newString; 
 }
  

Here is the full JavaScript:

Listing 7:

window.onload = function () {

    var outputDiv = document.getElementById("OutputDiv");
    // var subscriptionKey = getKey();

    var spellCheckButton = document.getElementById("SpellCheckButton");
    spellCheckButton.onclick = function () {
        var subscriptionKey = getKey();
        var textToCheck = document.getElementById("TextToCheck").textContent;

        var webSvcUrl = "https://api.cognitive.microsoft.com/bing/v7.0/spellcheck/?text=" + textToCheck;
        webSvcUrl = webSvcUrl + "&mode=proof&mkt=en-US";

        outputDiv.innerHTML = "Thinking...";

        var httpReq = new XMLHttpRequest();
        httpReq.open("GET", webSvcUrl, true);
        httpReq.setRequestHeader("Ocp-Apim-Subscription-Key", subscriptionKey)
        httpReq.setRequestHeader("contentType", "application/json")
        httpReq.onload = onSpellCheckSuccess;
        httpReq.onerror = onSpellCheckError;
        httpReq.send(null);
    };

    function onSpellCheckSuccess(evt) {
        var req = evt.srcElement;
        var resp = req.response;
        var data = JSON.parse(resp);

        var flaggedTokens = data.flaggedTokens;
        if (data.flaggedTokens.length > 0) {
            var newText = document.getElementById("TextToCheck").textContent;
            ;
            var outputHtml = "";
            flaggedTokens.forEach(flaggedToken => {
                var token = flaggedToken.token;
                var tokenType = flaggedToken.type;
                var offset = flaggedToken.offset;
                var suggestions = flaggedToken.suggestions;
                outputHtml += "<div>"
                outputHtml += "<h3>Token: " + token + "</h3>";
                outputHtml += "Type: " + tokenType + "<br/>";
                outputHtml += "Offset: " + offset + "<br/>";
                outputHtml += "<div>Suggestions</div>";
                outputHtml += "<ul>";

                if (suggestions.length > 0) {
                    suggestions.forEach(suggestion => {
                        outputHtml += "<li>" + suggestion.suggestion;
                        outputHtml += " (" + (suggestion.score * 100).toFixed(2) + "%)" 
                    });
                    outputHtml += "</ul>";
                    outputHtml += "</div>";

                    newText = replaceTokenWithSuggestion(newText, token, offset, suggestions[0].suggestion)
                }
                else {
                    outputHtml += "<ul><li>No suggestions for this token</ul>";
                }
            });

            newText = "<h2>New Text:</h2>" + newText;
            var newTextDiv = document.getElementById("NewTextDiv");
            newTextDiv.innerHTML = newText;

            outputHtml = "<h2>Details</h2>" + outputHtml;
            outputDiv.innerHTML = outputHtml;

        }
        else {
            outputDiv.innerHTML = "No errors found.";
        }
    };

    function onSpellCheckError(evt) {
        outputDiv.innerHTML = "An error has occurred!!!";
    };

    function replaceTokenWithSuggestion(originalString, oldToken, offset, newWord) {
        var textBeforeToken = originalString.substring(0, offset);

        var textAfterToken = "";
        if (originalString.length > textBeforeToken.length + oldToken.length) {
            textAfterToken = originalString.substring(offset + oldToken.length, originalString.length);
        }

        var newString = textBeforeToken + newWord + textAfterToken;

        return newString;
    }

};
  

Hopefully, this sample gives you an idea of how to get started building your first app that uses the Bing Spell Check API.



* This code currently has a bug in it: It only works if each suggestion is the same length as the token it replaces. I plan to fix this bug, but I'm publishing now because:

  1. It is not a fatal bug and
  2. It is not relevant to the call to the API, which is the primary point I'm showing in this article.
Friday, August 16, 2019 9:00:00 AM (GMT Daylight Time, UTC+01:00)
# Wednesday, August 14, 2019

In the last article, I showed how to create a Bing Spell Check service in Azure. Once you have created this service, you can now pass text to a web service to perform spell checking.

Given a text sample, the service checks the spelling of each token in the sample. A token is a word, or two words that should be a single word, such as "arti cle", which is a misspelling of the word "article".

It returns an array of unrecognized tokens, along with suggested replacements for these misspelled tokens.

URL and querystring arguments

The URL for the web service is
https://api.cognitive.microsoft.com/bing/v7.0/spellcheck

You can add some optional querystring parameters to this URL:

mode
Set this to "proof" if you want to check for spelling, grammar, and punctuation errors
Set it to "spell" if you only want to check for spelling errors.

If you omit the "mode" querystring argument, it defaults to "proof".

mkt
Set this to the Market Code of the country/language/culture you want to test. This is in the format [Language Code]-[Country Code], such as "en-US" for United States English. A full list of Market Codes can be found here.

The "proof" mode supports only the en-US, es-ES, and pt-BR Market Codes.

If you omit the mkt argument, the service will guess the market based on the text. Therefore, it is a good idea to include this value, even though it is optional.

Below is an example of a URL with some querystring values set.

https://api.cognitive.microsoft.com/bing/v7.0/spellcheck?mode=proof&mkt=en-us

POST vs GET

You have the option to submit either an HTTP POST or an HTTP GET request to the URL. We will discuss the differences below.

If you use the GET verb, you pass the text to check in the querystring, as in the following example:

https://api.cognitive.microsoft.com/bing/v7.0/spellcheck?mode=proof&mkt=en-us&text=Life+ig+buuutifull+all+the+tyme

With the GET method, the text is limited to 1,500 characters.

If you use the POST verb, the text is passed in the body of the request, as in the following example:

text=Life+ig+buuutifull+all+the+tyme

With the POST method, you can send text up to 10,000 characters long.
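
To make the POST version concrete, here is a minimal, hypothetical C# sketch using HttpClient. It assumes the text is sent as a standard form-urlencoded body, as in the example above, and that you substitute the key from your own Spell Check service.

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class SpellCheckClient
{
    public static async Task<string> CheckSpellingAsync(string textToCheck)
    {
        // Querystring arguments as described above; the key placeholder is hypothetical.
        const string url = "https://api.cognitive.microsoft.com/bing/v7.0/spellcheck?mode=proof&mkt=en-us";
        const string subscriptionKey = "<your Spell Check API key>";

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);

            // The text to check goes in the request body, so it can be up to 10,000 characters long.
            var body = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "text", textToCheck }
            });

            HttpResponseMessage response = await client.PostAsync(url, body);
            response.EnsureSuccessStatusCode();

            // Returns the JSON payload described in the "Results" section below.
            return await response.Content.ReadAsStringAsync();
        }
    }
}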

Results

If successful, the web service will return an HTTP 200 ("OK") response, along with the following data in JSON format in the body of the response:

  • _type: "SpellCheck"
  • flaggedTokens: an array representing the spelling errors found

Each flaggedToken consists of the following information:

  • offset: The position of the offending token within the text
  • token: The token text
  • type: The reason this token is in this list (usually "UnknownToken")
  • suggestions: An array of suggested replacements for the offending token. Each suggestion consists of the following:
    • suggestion: The suggested replacement text
    • score: A value (0-1) indicating the likelihood that this suggestion is the appropriate replacement

Below is an example of a response:

{
   "_type": "SpellCheck",
   "flaggedTokens": [{
     "offset": 5,
     "token": "ig",
     "type": "UnknownToken",
     "suggestions": [{
       "suggestion": "is",
       "score": 0.8922398888897022
     }]
   }, {
     "offset": 8,
     "token": "buuutifull",
     "type": "UnknownToken",
     "suggestions": [{
       "suggestion": "beautiful",
       "score": 0.8922398888897022
     }]
   }, {
     "offset": 27,
     "token": "tyme",
     "type": "UnknownToken",
     "suggestions": [{
       "suggestion": "time",
       "score": 0.8922398888897022
     }]
   }]
 }
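
If you are calling the service from .NET, one option is to deserialize this JSON into strongly-typed classes. Below is a minimal sketch using Newtonsoft.Json; the class and property names are my own and simply mirror the structure shown above.

using System.Collections.Generic;
using Newtonsoft.Json;

// Deserialize with: JsonConvert.DeserializeObject<SpellCheckResult>(json)
public class SpellCheckResult
{
    [JsonProperty("_type")]
    public string Type { get; set; }

    [JsonProperty("flaggedTokens")]
    public List<FlaggedToken> FlaggedTokens { get; set; }
}

public class FlaggedToken
{
    [JsonProperty("offset")]
    public int Offset { get; set; }

    [JsonProperty("token")]
    public string Token { get; set; }

    [JsonProperty("type")]
    public string TokenType { get; set; }

    [JsonProperty("suggestions")]
    public List<Suggestion> Suggestions { get; set; }
}

public class Suggestion
{
    [JsonProperty("suggestion")]
    public string Text { get; set; }

    [JsonProperty("score")]
    public double Score { get; set; }
}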
  

In this article, I showed how to call the Bing Spell Check service with either a GET or POST HTTP request.

Wednesday, August 14, 2019 8:53:00 AM (GMT Daylight Time, UTC+01:00)

The Bing Spell Check API allows you to call a simple web service to perform spell checking on your text.

Before you get started, you must log into a Microsoft Azure account and create a new Bing Spell Check Service. Here are the steps to do this:

In the Azure Portal, click the [Create a resource] button (Fig. 1); then, search for and select "Bing Spell Check", as shown in Fig. 2.

sc01-CreateResourceButton
Fig. 1

sc02-SearchForBingSpellCheck
Fig. 2

The "Bing Spell Check" page (currently on version 7) displays, which describes the service and provides links to documentation and information about the service, as shown in Fig. 3.

sc03-BingSpellCheckLandingPage
Fig. 3

Click the [Create] button to open the "Create" blade, as shown in Fig. 4.

sc04-CreateSpellCheckBlade
Fig. 4

At the "Name" field, enter a unique name for your service.

At the "Subscription" dropdown, select the subscription in which to create the service. Most of you will have only one subscription.

At the "Pricing Tier" dropdown, select the free or paid tier, as shown in Fig. 5.

sc05-PricingTiers
Fig. 5

The number of calls is severely limited for the free tier, so it is most useful for testing and learning the service. You may only create one free Spell Check service per subscription.

At the "Resource Group" field, select a resource group to associate with this service or click the "Create new" link to associate it with a newly-created resource group. A resource group provides a way to group related services, making it easier to manage them together.

Click the [Create] button to begin creating the service. This process takes only a few seconds.

Open the service and select the "Keys" blade, as shown in Fig. 6.

sc06-KeysBlade
Fig. 6

Either of the keys listed on this page can be used; one must be passed in the header of each web service call.

Save a copy of one of these keys. You will need it when I show you how to call the Bing Spell Check Service in tomorrow’s article.

Wednesday, August 14, 2019 1:46:16 AM (GMT Daylight Time, UTC+01:00)
# Monday, July 29, 2019

Episode 573

Ruth Yakubu on Machine Learning tools in Azure

Cloud Developer Advocate Ruth Yakubu describes Machine Learning Services and other new ML tools available in Azure.

Monday, July 29, 2019 8:48:00 AM (GMT Daylight Time, UTC+01:00)
# Thursday, July 25, 2019

GCast 58:

Creating and Deploying Azure Resources with ARM Templates

Learn how to generate an ARM template and use it to create and deploy resources to Azure.

Azure | DevOps | GCast | Screencast | Video
Thursday, July 25, 2019 10:34:22 PM (GMT Daylight Time, UTC+01:00)
# Thursday, July 18, 2019

GCast 57:

Azure Data Factory GitHub Deployment

Learn how to set up automated deployment from a GitHub repository to an Azure Data Factory.

Azure | GCast | GitHub | Screencast | Video
Thursday, July 18, 2019 11:53:00 AM (GMT Daylight Time, UTC+01:00)
# Wednesday, July 17, 2019

In a recent article, I introduced you to the "Recognize Text" API that returns the text in an image - a process known as "Optical Character Recognition", or "OCR".

In this article, I will show how to call this API from a .NET application.

Recall that the "Recognize Text" API consists of two web service calls:

We call the "Recognize Text" web service and pass an image to begin the process.

We call the "Get Recognize Text Operation Result" web service to check the status of the processing and retrieve the resulting text when the process is complete.

The sample .NET application

If you want to follow along, the code is available in RecognizeTextDemo, found in this GitHub repository.

To get started, you will need to create a Computer Vision key, as described here.

Creating this service gives you a URI endpoint to call as a web service, and an API key, which must be passed in the header of web service calls.

The App

To run the app, you will need to copy the key created above into the App.config file. Listing 1 shows a sample config file:

Listing 1:

<configuration>
   <appSettings>
     <add key="ComputerVisionKey" value="5070eab11e9430cea32254e3b50bfdd5" />
   </appSettings>
 </configuration>
  

You will also need an image with some text in it. For this demo, we will use the image shown in Fig. 1.

rt01-Kipling
Fig. 1

When you run the app, you will see the screen in Fig. 2.

rt02-Form1
Fig. 2

Press the [Get File] button and select the saved image, as shown in Fig. 3.

rt03-SelectImage
Fig. 3

Click the [Open] button. The Open File Dialog closes, the full path of the image displays on the form, and the [Start OCR] button is enabled, as shown in Fig. 4.

rt04-Form2
Fig. 4

Click the [Start OCR] button to call a service that starts the OCR. If an error occurs, it is possible that you did not configure the key correctly or that you are not connected to the Internet.

When the service call returns, the URL of the "Get Text" service displays (beneath the "Location Address" label), and the [Get Text] button is enabled, as shown in Fig. 5.

rt05-Form3
Fig. 5

Click the [Get Text] button. This calls the Location Address service and displays the status. If the status is "Succeeded", it displays the text in the image, as shown in Fig. 6.

rt06-Form4
Fig. 6

The code

Let's take a look at the code in this application. It is all written in C#. The relevant parts are the calls to the two web services: "Recognize Text" and "Get Recognize Text Operation Result". The first call kicks off the OCR job; the second call returns the status of the job and, when the job is complete, the text found.

The code is in the TextService static class.

This class has a constant: visionEndPoint, which is the base URL of the Computer Vision Cognitive Service you created above. The code in the repository is in Listing 2. You may need to modify the URL, if you created your service in a different region.

Listing 2:

const string visionEndPoint = "https://westus.api.cognitive.microsoft.com/";
  

Recognize Text

The call to the "Recognize Text" API is in Listing 3:

Listing 3:

public static async Task<string> GetRecognizeTextOperationResultsFromFile(string imageLocation, string computerVisionKey)
{
    var cogSvcUrl = visionEndPoint + "vision/v2.0/recognizeText?mode=Printed";
    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", computerVisionKey);
    HttpResponseMessage response;
    // Convert image to a Byte array
    byte[] byteData = null;
    using (FileStream fileStream = new FileStream(imageLocation, FileMode.Open, FileAccess.Read))
    {
        BinaryReader binaryReader = new BinaryReader(fileStream);
        byteData = binaryReader.ReadBytes((int)fileStream.Length);
    }

    // Call web service; pass image; wait for response
    using (ByteArrayContent content = new ByteArrayContent(byteData))
    {
        content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
        response = await client.PostAsync(cogSvcUrl, content);
    }

    // Read results
    RecognizeTextResult results = null;
    if (response.IsSuccessStatusCode)
    {
        var data = await response.Content.ReadAsStringAsync();
        results = JsonConvert.DeserializeObject<RecognizeTextResult>(data);
    }
    var headers = response.Headers;
    string locationAddress = "";
    IEnumerable<string> values;
    if (headers.TryGetValues("Operation-Location", out values))
    {
        locationAddress = values.First();
    }
    return locationAddress;
}
  

The first thing we do is construct the specific URL of this service call.

Then we use the System.Net.Http library to submit an HTTP POST request to this URL, passing in the image as an array of bytes in the body of the request. For more information on passing a binary file to a web service, see this article.

When the response returns, we check the headers for "Operation-Location", which contains the URL of the next web service to call. The URL contains a GUID that uniquely identifies this job. We save this for our next call.

Get Recognize Text Operation Result

After kicking off the OCR, we need to call a different service to check the status and get the results. The code in Listing 4 does this.

Listing 4:

public static async Task<RecognizeTextResult> GetRecognizeTextOperationResults(string locationAddress, string computerVisionKey) 
 { 
    var client = new HttpClient(); 
    client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", computerVisionKey); 
    var response = await client.GetAsync(locationAddress); 
    RecognizeTextResult results = null; 
    if (response.IsSuccessStatusCode) 
    { 
        var data = await response.Content.ReadAsStringAsync(); 
        results = JsonConvert.DeserializeObject<RecognizeTextResult>(data); 
    } 
    return results; 
 }
  

This code is much simpler because it is an HTTP GET and we don't need to pass anything in the request body.

We simply submit an HTTP GET request and use the Newtonsoft.Json library to deserialize the JSON response into a RecognizeTextResult object.

Here is the complete code in the TextService class:

Listing 5:

using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using TextLib.Models;

namespace TextLib
{

    public static class TextService
    {
        const string visionEndPoint = "https://westus.api.cognitive.microsoft.com/";

public static async Task<string> GetRecognizeTextOperationResultsFromFile(string imageLocation, string computerVisionKey)
{
    var cogSvcUrl = visionEndPoint + "vision/v2.0/recognizeText?mode=Printed";
    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", computerVisionKey);
    HttpResponseMessage response;
    // Convert image to a Byte array
    byte[] byteData = null;
    using (FileStream fileStream = new FileStream(imageLocation, FileMode.Open, FileAccess.Read))
    {
        BinaryReader binaryReader = new BinaryReader(fileStream);
        byteData = binaryReader.ReadBytes((int)fileStream.Length);
    }

    // Call web service; pass image; wait for response
    using (ByteArrayContent content = new ByteArrayContent(byteData))
    {
        content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
        response = await client.PostAsync(cogSvcUrl, content);
    }

    // Read results
    RecognizeTextResult results = null;
    if (response.IsSuccessStatusCode)
    {
        var data = await response.Content.ReadAsStringAsync();
        results = JsonConvert.DeserializeObject<RecognizeTextResult>(data);
    }
    var headers = response.Headers;
    string locationAddress = "";
    IEnumerable<string> values;
    if (headers.TryGetValues("Operation-Location", out values))
    {
        locationAddress = values.First();
    }
    return locationAddress;
}

        public static async Task<RecognizeTextResult> GetRecognizeTextOperationResults(string locationAddress, string computerVisionKey)
        {
            var client = new HttpClient();
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", computerVisionKey);
            var response = await client.GetAsync(locationAddress);
            RecognizeTextResult results = null;
            if (response.IsSuccessStatusCode)
            {
                var data = await response.Content.ReadAsStringAsync();
                results = JsonConvert.DeserializeObject<RecognizeTextResult>(data);
            }
            return results;
        }

    }
}
  

The remaining code

There is other code in this application to do things like select the file from disk and loop through the JSON to concatenate all the text; but this code is very simple and (hopefully) self-documenting. You may choose other ways to get the file and handle the JSON in the response.
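
For reference, here is a minimal, hypothetical console sketch showing how the two TextService methods above might be wired together. It assumes a project reference to TextLib; the key and image path are placeholders, and the simple delay stands in for checking the status until it is "Succeeded".

using System;
using System.Threading.Tasks;
using TextLib;
using TextLib.Models;

class Program
{
    static async Task Main()
    {
        // Placeholders: substitute your Computer Vision key (or read it from
        // App.config, as the demo does) and the path to a local image.
        string key = "<your Computer Vision API key>";
        string imagePath = @"C:\images\kipling.jpg";

        // Start the OCR job; the return value is the Operation-Location URL.
        string locationAddress =
            await TextService.GetRecognizeTextOperationResultsFromFile(imagePath, key);

        // Give the service a few seconds to analyze the image, then request the results.
        // In a real app, you would check the status and retry until it is "Succeeded".
        await Task.Delay(TimeSpan.FromSeconds(5));
        RecognizeTextResult result =
            await TextService.GetRecognizeTextOperationResults(locationAddress, key);

        Console.WriteLine(result != null ? "Results received." : "No results returned.");
    }
}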

In this article, I've focused on the code to manage the Cognitive Services calls and responses to those calls in order to get the text from a picture of text.

Wednesday, July 17, 2019 10:51:00 AM (GMT Daylight Time, UTC+01:00)
# Friday, July 12, 2019

From its earliest days, Microsoft Cognitive Services has had the ability to convert pictures of text into text - a process known as Optical Character Recognition. I wrote about using this service here and here.

Recently, Microsoft released a new service to perform OCR. Unlike the previous service, which requires only a single web service call, this service requires two calls: one to pass an image and start the text recognition process; and another to check the status of that process and return the transcribed text.

To get started, you will need to create a Computer Vision key, as described here.

Creating this service gives you a URI endpoint to call as a web service, and an API key, which must be passed in the header of web service calls.

Recognize Text

The first call is to the Recognize Text API. To call this API, send an HTTP POST to the following URL:

https://lllll.api.cognitive.microsoft.com/vision/v2.0/recognizeText?mode=mmmmm

where:

lllll is the location selected when you created the Computer Vision Cognitive Service in Azure; and

mmmmm is "Printed" if the image contains printed text, as from a computer or typewriter; or "Handwritten" if the image contains a picture of handwritten text.

The header of an HTTP request can include name-value pairs. In this request, include the following name-value pairs:

  • Ocp-Apim-Subscription-Key: The Computer Vision API key (from the Cognitive Service created above)
  • Content-Type: "application/json", if you plan to pass a URL pointing to an image on the public web; "application/octet-stream", if you are passing the actual image in the request body.

Details about the request body are described below.

You must pass the image or the URL of the image in the request body. What you pass must be consistent with the "Content-Type" value passed in the header.

If you set the Content-Type header value to "application/json", pass the following JSON in the request body:

{"url":"http://xxxx.com/xxx.xxx"}  

where http://xxxx.com/xxx.xxx is the URL of the image you want to analyze. This image must be accessible to Cognitive Service (e.g., it cannot be behind a firewall or password-protected).

If you set the Content-Type header value to "application/octet-stream", pass the binary image in the request body.

You will receive an HTTP response to your POST. If you receive a response code of "202" ("Accepted"), this is an indication that the POST was successful, and the service is analyzing the image. An "Accepted" response will include the "Operation-Location" header. The value of this header contains a URL that you can use to query whether the service has finished analyzing the image. The URL will look like the following:

https://lllll.api.cognitiveservices.microsoft.com/vision/v2.0/textOperations/gggggggg-gggg-gggg-gggg-gggggggggggg

where

lllll is the location selected when you created the Computer Vision Cognitive Service in Azure; and

gggggggg-gggg-gggg-gggg-gggggggggggg is a GUID that uniquely identifies the analysis job.

Get Recognize Text Operation Result

After you call the Recognize Text service, you can call the Get Recognize Text Operation Result service to determine if the OCR operation is complete.

To call this service, send an HTTP GET request to the "Operation-Location" URL returned in the request above.

In the header, send the following name-value pair:

  • Ocp-Apim-Subscription-Key: The Computer Vision API key (from the Cognitive Service created above)

This is the same value as in the previous request.

An HTTP GET request has no body, so there is nothing to send there.

If the request is successful, you will receive an HTTP "200" ("OK") response code. A successful response does not mean that the image has been analyzed. To know if it has been analyzed, you will need to look at the JSON object returned in the body of the response.

At the root of this JSON object is a property named "status". If the value of this property is "Succeeded", this indicates that the analysis is complete, and the text of the image will also be included in the same JSON object.

Other possible statuses are "NotStarted", "Running" and "Failed".

A successful status will include the recognized text in the JSON document.

At the root of the JSON (the same level as "status") is an object named "recognitionResult". This object contains a child object named "lines".

The "lines" object contains an array of anonymous objects, each of which contains a "boundingBox" object, a "text" object, and a "words" object. Each object in this array represents a line of text.

The "boundingBox" object contains an array of exactly 8 integers, representing the x,y coordinates of the corners of an invisible rectangle around the line.

The "text" object contains a string with the full text of the line.

The "words" object contains an array of anonymous objects, each of which contains a "boundingBox" object and a "text" object. Each object in this array represents a single word in this line.

The "boundingBox" object contains an array of exactly 8 integers, representing the x,y coordinates of the corners of an invisible rectangle around the word.

The "text" object contains a string with the word.

Below is a sample of a partial result:

{ 
  "status": "Succeeded", 
  "recognitionResult": { 
    "lines": [ 
      { 
        "boundingBox": [ 
          202, 
          618, 
          2047, 
          643, 
          2046, 
          840, 
          200, 
          813 
        ], 
        "text": "The walrus and the carpenter", 
         "words": [ 
          { 
            "boundingBox": [ 
               204, 
              627, 
              481, 
              628, 
              481, 
              830, 
              204, 
               829 
            ], 
            "text": "The" 
           }, 
          { 
            "boundingBox": [ 
              519, 
              628, 
              1057, 
              630, 
               1057, 
              832, 
              518, 
               830 
            ], 
           "text": "walrus" 
          }, 
          ...etc... 
  

In this article, I showed details of the Recognize Text API. In a future article, I will show how to call this service from code within your application.
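
In the meantime, here is a minimal, hypothetical C# sketch that repeatedly calls the "Operation-Location" URL until the status is "Succeeded". It uses HttpClient and Newtonsoft.Json; the class and method names are my own and are only meant to illustrate the flow described above.

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

class RecognizeTextPoller
{
    // operationLocation is the "Operation-Location" URL returned by the Recognize Text call;
    // subscriptionKey is the Computer Vision API key described above.
    public static async Task<JObject> WaitForResultAsync(string operationLocation, string subscriptionKey)
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);

            while (true)
            {
                var json = JObject.Parse(await client.GetStringAsync(operationLocation));
                var status = (string)json["status"];

                if (status == "Succeeded")
                {
                    return json;          // contains the "recognitionResult" object
                }
                if (status == "Failed")
                {
                    throw new Exception("Text recognition failed.");
                }

                // "NotStarted" or "Running": wait a moment and ask again.
                await Task.Delay(TimeSpan.FromSeconds(1));
            }
        }
    }
}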

Friday, July 12, 2019 2:00:09 PM (GMT Daylight Time, UTC+01:00)
# Thursday, July 11, 2019

GCast 56:

Azure Web App Deployment Slots

Deployment slots allow you to test changes to your web application in a production-like environment before deploying to production.

Azure | GCast | Screencast | Video | Web
Thursday, July 11, 2019 9:27:00 AM (GMT Daylight Time, UTC+01:00)
# Wednesday, July 10, 2019

Azure Databricks is a platform built on top of Apache Spark and deployed to Microsoft's Azure cloud. Databricks provides a web-based interface that makes it simple for users to create and scale clusters of Spark servers and to deploy jobs and notebooks to those clusters. Spark provides a general-purpose compute engine ideal for working with big data, thanks to its built-in parallelization engine.

In the last article in this series, I showed how to create a new Databricks Cluster in a Microsoft Azure Databricks Workspace.

In this article, I will show how to create a notebook and run it on that cluster.

Navigate to the Databricks service, as shown in Fig. 1.

db01-OverviewBlade
Fig. 1

Click the [Launch Workspace] button (Fig. 2) to open the Azure Databricks page, as shown in Fig. 3.

db02-LaunchWorkspaceButton
Fig. 2

db03-DatabricksHomePage
Fig. 3

Click the "New Notebook" link under "Common Tasks" to open the "Create Notebook" dialog, as shown in Fig. 4.

db04-CreateNotebookDialog
Fig. 4

At the "Name" field, enter a name for your notebook. The name must be unique within this workspace.

At the "Language" dropdown, select the default language for your notebook. Current options are Python, Scala, SQL, and R. Selecting a language does not limit you to only using that language within this notebook. You can override the language in a given cell.

Click the [Create] button to create the new notebook. A blank notebook displays, as shown in Fig. 5.

db05-BlankNotebook
Fig. 5

Fig. 6 shows a notebook with some simple code added to the first 2 cells.

db06-Notebook
Fig. 6

You can add, move, or manipulate cells by clicking the cell menu at the top right of an existing cell, as shown in Fig. 7.

db07-AddCell
Fig. 7

In order to run your notebook, you will need to attach it to an existing, running cluster. Click the "Attach to" dropdown and select from the clusters in the current workspace, as shown in Fig. 8. See this article for information on how to create a cluster.

db08-AttachCluster
Fig. 8

You can run all the cells in a notebook by clicking the "Run all" button in the toolbar, as shown in Fig. 9.

db09-RunAll
Fig. 9

Use the "Run" menu in the top right of a cell to run only that cell or the cells above or below it, as shown in Fig. 10.

db10-RunCell
Fig. 10

Fig. 11 shows a notebook after all cells have been run. Note the output displayed below each cell.

db11-NotebookWithResults
Fig. 11

In this article, I showed how to create, run, and manage a notebook in an Azure Databricks workspace.

Wednesday, July 10, 2019 9:20:00 AM (GMT Daylight Time, UTC+01:00)
# Tuesday, July 9, 2019

Azure Databricks is a platform built on top of Apache Spark and deployed to Microsoft's Azure cloud. Databricks provides a web-based interface that makes it simple for users to create and scale clusters of Spark servers and to deploy jobs and notebooks to those clusters. Spark provides a general-purpose compute engine ideal for working with big data, thanks to its built-in parallelization engine.

In the last article in this series, I showed how to create a new Databricks service in Microsoft Azure.

A cluster is a set of compute nodes that can work together. All Databricks jobs run in a cluster, so you will need to create one if you want to do anything with your Databricks service.

In this article, I will show how to create a cluster in that service.

Navigate to the Databricks service, as shown in Fig. 1.

db01-OverviewBlade
Fig. 1

Click the [Launch Workspace] button (Fig. 2) to open the Azure Databricks page, as shown in Fig. 3.

db02-LaunchWorkspaceButton
Fig. 2

db03-DatabricksHomePage
Fig. 3

Click the "New Cluster" link to open the "Create Cluster" dialog, as shown in Fig. 4.

db04-CreateCluster
Fig. 4

At the "Cluster Name" field, enter a descriptive name for your cluster.

At the "Cluster Mode" dropdown, select "Standard" or "High Concurrency". The "High Concurrency" option can run multiple jobs concurrently.

At the "Databricks Runtime Version" dropdown, select the runtime version you wish to support on this cluster. I recommend selecting the latest non-beta version.

At the "Python Version" dropdown, select the version of Python you wish to support. New code will likely be written in version 3, but you may be running old notebooks written in version 2.

I recommend checking the "Enable autoscaling" checkbox. This allows the cluster to automatically spin up the number of nodes required for a job, effectively balancing cost and performance.

I recommend checking the "Terminate after ___ minutes" checkbox and entering a reasonable period of inactivity (I usually set this to 60 minutes) after which the cluster shuts down. Running a cluster is an expensive operation, so you will save a lot of money if you shut clusters down when they are not in use. Because it takes a long time to spin up a cluster, consider how frequently a new job is required before setting this value too low. You may need to experiment with this value to get it right for your situation.

At the "Worker Type" field, select the size of the machines to include in your cluster. If you enabled autoscaling, you can set the minimum and maximum number of worker nodes as well. If you did not enable autoscaling, you can only set the number of worker nodes. My experience is that more nodes on smaller machines tend to be more cost-effective than fewer nodes on more powerful machines; but you may want to experiment with your jobs to find the optimum setting for your organization.

At the "Driver Type" dropdown, select "Same as worker".

You can expand the "Advanced Options" section to pass specific data to your cluster, but this is usually not necessary.

Click the [Create Cluster] button to create this cluster. It will take a few minutes to create and start a new cluster.

When the cluster is created, you will see it listed, as shown in Fig. 5, with a state of "Running".

db05-Clusters
Fig. 5

You are now ready to create jobs and run them on this cluster. I will cover this in a future article.

In this article, you learned how to create a cluster in an existing Azure Databricks workspace.

Tuesday, July 9, 2019 9:37:00 AM (GMT Daylight Time, UTC+01:00)
# Friday, July 5, 2019

Azure Databricks is a web-based platform built on top of Apache Spark and deployed to Microsoft's Azure cloud platform.

Databricks provides a web-based interface that makes it simple for users to create and scale clusters of Spark servers and deploy jobs and Notebooks to those clusters. Spark provides a general-purpose compute engine ideal for working with big data, thanks to its built-in parallelization engine.

Apache Spark is open source and Databricks is owned by the Databricks company, but Microsoft adds value by providing the hardware and fabric on which these tools are deployed, including capacity on which to scale and built-in fault tolerance.

To create an Azure Databricks environment, navigate to the Azure Portal, log in, and click the [Create Resource] button (Fig. 1).

db01-CreateResourceButton
Fig. 1

From the menu, select Analytics | Azure Databricks, as shown in Fig. 2.

db02-NewDataBricksMenu
Fig. 2

The "Azure Databricks service" blade displays, as shown in Fig. 3.

db03-NewDataBricksBlade
Fig. 3

At the "Workspace name" field, enter a unique name for the Databricks workspace you will create.

At the "Subscription" field, select the subscription associated with this workspace. Most of you will have only one subscription.

At the "Resource group" field, click the "Use existing" radio button and select an existing Resource Group from the dropdown below; or click the "Create new" button and enter the name and region of a new Resource Group when prompted.

At the "Location" field, select the location in which to store your workspace. Considerations include the location of the data on which you will be working and the location of developers and users who will access this workspace.

At the "Pricing Tier" dropdown, select the desired pricing tier. The Pricing Tier options are shown in Fig. 4.

db04-PricingTier
Fig. 4

If you wish to deploy this workspace to a particular virtual network, select the "Yes" radio button at this question.

When completed, the blade should look similar to Fig. 5.

db05-NewDataBricksBlade-Completed
Fig. 5

Click the [Create] button to create the new Databricks service. This may take a few minutes.

Navigate to the Databricks service, as shown in Fig. 6.

db06-OverviewBlade
Fig. 6

Click the [Launch Workspace] button (Fig. 7) to open the Azure Databricks page, as shown in Fig. 8.

db07-LaunchWorkspaceButton
Fig. 7

db08-DatabricksHomePage
Fig. 8

In this article, I showed you how to create a new Azure Databricks service. In future articles, I will show how to create clusters, notebooks, and otherwise make use of your Databricks service.

Friday, July 5, 2019 9:00:00 AM (GMT Daylight Time, UTC+01:00)
# Thursday, July 4, 2019

GCast 55:

GitHub Deployment to an Azure Web App

Learn how to set up automated deployment from a GitHub repository to an Azure Web App

Thursday, July 4, 2019 9:58:00 AM (GMT Daylight Time, UTC+01:00)
# Wednesday, July 3, 2019

Source control is an important part of software development - from collaborating with other developers to enabling continuous integration and continuous deployment to providing the ability to roll back changes.

Azure Data Factory (ADF) provides the ability to integrate with the source control systems GitHub and Azure DevOps.

I will walk you through doing this, using GitHub.

Before you get started, you must have the following:

A GitHub account (Free at https://github.com)

A GitHub repository created in your account, with at least one file in it. You can easily add a "readme.md" file to a repository from within the GitHub portal.

Create an ADF service, as described in this article.

Open the "Author & Monitor" page (Fig. 1) and click the "Set up Code Repository" button (Fig. 2).

ar01-ADFOverviewPage
Fig. 1

ar02-SetupCodeRepositoryButton
Fig. 2

The "Repository Settings" blade displays, as shown in Fig. 3.

ar03-RepositoryType
Fig. 3

At the "Repository Type" dropdown, select the type of source control you are using. The current options are "Azure DevOps Git" and "GitHub". For this demo, I have selected "GitHub".

When you select a Repository type, the rest of the dialog expands with prompts relevant to that type. Fig. 4 shows the prompts when you select "GitHub".

ar04-RepositoryName
Fig. 4

I don't have a GitHub Enterprise account, so I left this checkbox unchecked.

At the "GitHub Account" field, enter the name of your GitHub account. You don't need the full URL - just the name. For example, my GitHub account name is "davidgiard", which you can find online at https://github.com/davidgiard; so, I entered "davidgiard" into the "GitHub Account" field.

The first time you enter this account, you may be prompted to sign in and to authorize Azure to access your GitHub account.

Once you enter a valid GitHub account, the "Git repository name" dropdown is populated with a list of your repositories. Select the repository you created to hold your ADF assets.

After you select a repository, you are prompted for more specific information, as shown in Fig. 5

ar05-RepositorySettings
Fig. 5

At the "Collaboration branch", select "master". If you are working in a team environment or with multiple releases, it might make sense to check into a different branch in order control when changes are merged. To do this, you will need to create a new branch in GitHub.

At the "Root folder", select a folder of the repository in which to store your ADF assets. I typically leave this at "/" to store everything in the root folder; but, if you are storing multiple ADF services in a single repository, it might make sense to organize them into separate folders.

Check the "Import existing Data Factory resources to repository" checkbox. This causes any current assets in this ADF asset to be added to the repository as soon as you save. If you have not yet created any pipelines, this setting is irrelevant.

At the "Branch to import resources into" radio buttons, select "Use Collaboration".

Click the [Save] button to save your changes and push any current assets into the GitHub repository.

Within seconds, any pipelines, linked services, or datasets in this ADF service will be pushed into GitHub. You can refresh the repository, as shown in Fig. 6.

ar06-GitHub
Fig. 6

Fig. 7 shows a pipeline asset. Notice that it is saved as JSON, which can easily be deployed to another server.

ar07-GitHub
Fig. 7

In this article, you learned how to connect your ADF service to a GitHub repository, storing and versioning all ADF assets in source control.

Wednesday, July 3, 2019 6:56:40 PM (GMT Daylight Time, UTC+01:00)
# Monday, July 1, 2019

Episode 570

Laurent Bugnion on Migrating Data to Azure

Laurent Bugnion describes how he migrated from on-premise MongoDB and SQL Server databases to CosmosDB and Azure SQL Database running in Microsoft Azure, using both native tools and the Database migration service.

Monday, July 1, 2019 9:39:00 AM (GMT Daylight Time, UTC+01:00)
# Friday, June 28, 2019

Azure Data Factory (ADF) is an example of an Extract, Transform, and Load (ETL) tool, meaning that it is designed to extract data from a source system, optionally transform its format, and load it into a different destination system.

The source and destination data can reside in different locations, in different data stores, and can support different data structures.

For example, you can extract data from an Azure SQL database and load it into an Azure Blob storage container.

To create a new Azure Data Factory, log into the Azure Portal, click the [Create a resource] button (Fig. 1) and select Integration | Data Factory from the menu, as shown in Fig. 2.

df01-CreateResource
Fig. 1

df02-IntegrationDataFactory
Fig. 2

The "New data factory" blade displays, as shown in Fig. 3.

df03-NewDataFactory
Fig. 3

At the "Name" field, enter a unique name for this Data Factory.

At the Subscription dropdown, select the subscription with which you want to associate this Data Factory. Most of you will only have one subscription, making this an easy choice.

At the "Resource Group" field, select an existing Resource Group or create a new Resource Group which will contain your Data Factory.

At the "Version" dropdown, select "V2".

At the "Location" dropdown, select the Azure region in which you want your Data Factory to reside. Consider the location of the data with which it will interact and try to keep the Data Factory close to this data, in order to reduce latency.

Check the "Enable GIT" checkbox, if you want to integrate your ETL code with a source control system.

After the Data Factory is created, you can search for it by name or within the Resource Group containing it. Fig. 4 shows the "Overview" blade of a Data Factory.

df04-OverviewBlade
Fig. 4

To begin using the Data Factory, click the [Author & Monitor] button in the middle of the blade.

The "Azure Data Factory Getting Started" page displays in a new browser tab, as shown in Fig. 5.

df05-GetStarted
Fig. 5

Click the [Copy Data] button (Fig. 6) to display the "Copy Data" wizard, as shown in Fig. 7.

df06-CopyDataIcon
Fig. 6

df07-Properties
Fig. 7

This wizard steps you through the process of creating a Pipeline and its associated artifacts. A Pipeline performs an ETL on a single source and destination and may be run on demand or on a schedule.

At the "Task name" field, enter a descriptive name to identify this pipeline later.

Optionally, you can add a description to your task.

You have the option to run the task on a regular or semi-regular schedule (Fig. 8); but you can set this later, so I prefer to select "Run once now" until I know it is working properly.

df08-Schedule
Fig. 8

Click the [Next] button to advance to the "Source data store" page, as shown in Fig. 9.

df09-Source
Fig. 9

Click the [+ Create new connection] button to display the "New Linked Service" dialog, as shown in Fig. 10.

df10-NewLinkedService
Fig. 10

This dialog lists all the supported data stores.
At the top of the dialog is a search box and a set of links, which allow you to filter the list of data stores, as shown in Fig. 11.

df11-AzureSql
Fig. 11

Fig. 12 shows the next dialog if you select Azure SQL Database as your data source.

df12-AzureSqlDetails
Fig. 12

In this dialog, you can enter information specific to the database from which you are extracting data. When complete, click the [Test connection] button to verify your entries are correct; then click the [Finish] button to close the dialog.

After successfully creating a new connection, the connection appears in the "Source data store" page, as shown in Fig. 13.

df13-Source
Fig. 13

Click the [Next] button to advance to the next page in the wizard, which asks questions specific to the type of data in your data source. Fig. 14 shows the page for Azure SQL databases, which allows you to select which tables to extract.

df14-SelectTables
Fig. 14

Click the [Next] button to advance to the "Destination data store", as shown in Fig. 15.

df15-Destination
Fig. 15

Click the [+ Create new connection] button to display the "New Linked Service" dialog, as shown in Fig. 16.

df16-NewLinkedService
Fig. 16

As with the source data connection, you can filter this list via the search box and top links, as shown in Fig. 17. Here we are selecting Azure Data Lake Storage Gen2 as our destination data store.

df17-NewLinkedService-ADL
Fig. 17

After selecting a service, click the [Continue] button to display a dialog requesting information about the data service you selected. Fig. 18 shows the page for Azure Data Lake. When complete, click the [Test connection] button to verify your entries are correct; then click the [Finish] button to close the dialog.

df18-ADLDetails
Fig. 18

After successfully creating a new connection, the connection appears in the "Destination data store" page, as shown in Fig. 19.

df19-Destination
Fig. 19

Click the [Next] button to advance to the next page in the wizard, which asks questions specific to the type of data in your data destination. Fig. 20 shows the page for Azure Data Lake, which allows you to select the destination folder and file name.

df20-ChooseOutput
Fig. 20

Click the [Next] button to advance to the "File format settings" page, as shown in Fig. 21.

df21-FileFormatSettings
Fig. 21

At the "File format" dropdown, select a format in which to structure your output file. The prompts change depending on the format you select. Fig.  21 shows the prompts for a Text format file.

Complete the page and click the [Next] button to advance to the "Settings" page, as shown in Fig. 22.

df22-Settings
Fig. 22

The important question here is "Fault tolerance". When an error occurs, do you want to abort the entire activity, skipping the remaining records, or do you want to log the error, skip the bad record, and continue with the remaining records?

Click the [Next] button to advance to the "Summary" page as shown in Fig. 23.

df23-Summary
Fig. 23

This page lists the selections you have made to this point. You may edit a section if you want to change any settings. When satisfied with your changes, click the [Next] button to kick off the activity and advance to the "Deployment complete" page, as shown in Fig. 24.

df24-DeploymentComplete
Fig. 24

You will see progress of the major steps in this activity as they run. You can click the [Monitor] button to see a more detailed real-time progress report or you can click the [Finish] button to close the wizard.

In this article, you learned about the Azure Data Factory and how to create a new data factory with an activity to copy data from a source to a destination.

Friday, June 28, 2019 9:04:00 AM (GMT Daylight Time, UTC+01:00)
# Thursday, June 27, 2019

GCast 54:

Azure Storage Replication

Learn about the data replication options in Azure Storage and how to set the option appropriate for your needs.

Azure | Database | GCast | Screencast | Video
Thursday, June 27, 2019 4:16:00 PM (GMT Daylight Time, UTC+01:00)
# Tuesday, June 25, 2019

Data Lake storage is a type of Azure Storage that supports a hierarchical structure.

There are no pre-defined schemas in a Data Lake, so you have a lot of flexibility on the type of data you want to store. You can store structured data or unstructured data or both. In fact, you can store data of different data types and structures in the same Data Lake.

Typically a Data Lake is used for ingesting raw data in order to preserve that data in its original format. The low cost, lack of schema enforcement, and optimization for inserts make it ideal for this. From the Microsoft docs: "The idea with a data lake is to store everything in its original, untransformed state."

After saving the raw data, you can then use ETL tools, such as SSIS or Azure Data Factory, to copy and/or transform this data into a more usable format in another location.

Like most solutions in Azure, it is inherently highly scalable and highly reliable.

Data in Azure Data Lake is stored in a Data Lake Store.

Under the hood, a Data Lake Store is simply an Azure Storage account with some specific properties set.

To create a new Data Lake storage account, navigate to the Azure Portal, log in, and click the [Create a Resource] button (Fig.1).

dl01-CreateResource
Fig. 1

From the menu, select Storage | Storage Account, as shown in Fig. 2.

dl02-MenuStorageAccount
Fig. 2

The "Create Storage Account" dialog with the "Basic" tab selected displays, as shown in Fig. 3.

dl03-Basics
Fig. 3

At the “Subscription” dropdown, select the subscription with which you want to associate this account. Most of you will have only one subscription.

At the "Resource group" field, select a resource group in which to store your service or click "Create new" to store it in a newly-created resource group. A resource group is a logical container for Azure resources.

At the "Storage account name" field, enter a unique name for the storage account.

At the "Location" field, select the Azure Region in which to store this service. Consider where the users of this service will be, so you can reduce latency.

At the "Performance" field, select the "Standard" radio button. You can select the "Premium" performance button to achieve faster reads; however, there may be better ways to store your data if performance is your primary objective.

At the "Account kind" field, select "Storage V2"

At the "Replication" dropdown, select your preferred replication. Replication is explained here.

At the "Access tier" field, select the "Hot" radio button.

Click the [Next: Advanced>] button to advance to the "Advanced" tab, as shown in Fig. 4.

dl04-Advanced
Fig. 4

The important field on this tab is "Hierarchical namespace". Select the "Enabled" radio button at this field.

Click the [Review + Create] button to advance to the "Review + Create" tab, as shown in Fig. 5.

dl05-Review
Fig. 5

Verify all the information on this tab; then click the [Create] button to begin creating the Data Lake Store.

After a minute or so, a storage account is created. Navigate to this storage account and click the [Data Lake Gen2 file systems] button, as shown in Fig. 6.

dl06-Services
Fig. 6

The "File Systems" blade displays, as shown in Fig. 7.

dl07-FileSystem
Fig. 7

Data Lake data is partitioned into file systems, so you must create at least one file system. Click the [+ File System] button and enter a name for the file system you wish to create, as shown in Fig. 8.

dl08-AddFileSystem
Fig. 8

Click the [OK] button to add this file system and close the dialog. The newly-created file system displays, as shown in Fig. 9.

dl09-FileSystem
Fig. 9

If you double-click the file system in the list, a page displays where you can set access control and read about how to manage the files in this Data Lake Storage, as shown in Fig. 10

dl10-FileSystem
Fig. 10
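
You can also create a file system programmatically. The following is a minimal sketch using C# and the Azure.Storage.Files.DataLake NuGet package; the account name, key, and file system name are placeholder values that you would replace with your own.

using System;
using Azure.Storage;
using Azure.Storage.Files.DataLake;

class CreateFileSystem
{
    static void Main()
    {
        // Placeholders: replace with your own storage account name and key.
        string accountName = "mydatalakeaccount";
        string accountKey = "xxxxxxx";

        var credential = new StorageSharedKeyCredential(accountName, accountKey);
        var serviceClient = new DataLakeServiceClient(
            new Uri($"https://{accountName}.dfs.core.windows.net"),
            credential);

        // Create the file system if it does not already exist.
        DataLakeFileSystemClient fileSystem = serviceClient.GetFileSystemClient("rawdata");
        fileSystem.CreateIfNotExists();

        Console.WriteLine($"File system '{fileSystem.Name}' is ready.");
    }
}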

In this article, you learned how to create a Data Lake Storage and a file system within it.

Tuesday, June 25, 2019 10:10:00 AM (GMT Daylight Time, UTC+01:00)
# Thursday, June 20, 2019

GCast 53:

Creating a Data Warehouse in Azure

Learn how to create a new SQL Server data warehouse in Microsoft Azure.

Thursday, June 20, 2019 9:24:00 AM (GMT Daylight Time, UTC+01:00)
# Wednesday, June 12, 2019

In a previous article, I showed how to use the Microsoft Cognitive Services Computer Vision API to perform Optical Character Recognition (OCR) on a document containing a picture of text. We did so by making an HTTP POST to a REST service.

If you are developing with .NET languages, such as C#, Visual Basic, or F#, a NuGet package makes this call easier. Classes in this package abstract the REST call, so you can write less and simpler code; and strongly-typed objects allow you to make the call and parse the results more easily.


To get started, you will first need to create a Computer Vision service in Azure and retrieve the endpoint and key, as described here.

Then, you can create a new C# project in Visual Studio. I created a WPF application, which can be found and downloaded at my GitHub account. Look for the project named "OCR-DOTNETDemo". Fig. 1 shows how to create a new WPF project in Visual Studio.

od01-FileNewProject
Fig. 1

In the Solution Explorer, right-click the project and select "Manage NuGet Packages", as shown in Fig. 2.

od02-ManageNuGet
Fig. 2

Search for and install the "Microsoft.Azure.CognitiveServices.Vision.ComputerVision" package, as shown in Fig. 3.

od03-NuGet
Fig. 3

The important classes in this package are:

  • OcrResult
    A class representing the object returned from the OCR service. It consists of an array of OcrRegions, each of which contains an array of OcrLines, each of which contains an array of OcrWords. Each OcrWord has a text property, representing the text that is recognized. You can reconstruct all the text in an image by looping through each array.
  • ComputerVisionClient
    This class contains the RecognizePrintedTextInStreamAsync method, which abstracts the HTTP REST call to the OCR service.
  • ApiKeyServiceClientCredentials
    This class constructs credentials that will be passed in the header of the HTTP REST call.

Create a new class in the project named "OCRServices" and make its scope "internal" or "public"

Add the following "using" statements to the top of the class:

using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
using System.IO;
using System.Text;
using System.Threading.Tasks;
  


Add the following methods to this class:

Listing 1:

internal static async Task<OcrResult> UploadAndRecognizeImageAsync(string imageFilePath, OcrLanguages language)
{
    // Replace these with the key and endpoint of your own Computer Vision service.
    string key = "xxxxxxx";
    string endPoint = "https://xxxxx.api.cognitive.microsoft.com/";
    var credentials = new ApiKeyServiceClientCredentials(key);

    using (var client = new ComputerVisionClient(credentials) { Endpoint = endPoint })
    {
        using (Stream imageFileStream = File.OpenRead(imageFilePath))
        {
            // Call the OCR service, passing the image as a stream.
            OcrResult ocrResult = await client.RecognizePrintedTextInStreamAsync(false, imageFileStream, language);
            return ocrResult;
        }
    }
}

internal static async Task<string> FormatOcrResult(OcrResult ocrResult)
{
    // Walk each region, line, and word, concatenating the recognized text.
    var sb = new StringBuilder();
    foreach (OcrRegion region in ocrResult.Regions)
    {
        foreach (OcrLine line in region.Lines)
        {
            foreach (OcrWord word in line.Words)
            {
                sb.Append(word.Text);
                sb.Append(" ");
            }
            sb.Append("\r\n");
        }
        sb.Append("\r\n\r\n");
    }
    return sb.ToString();
}
  

The UploadAndRecognizeImageAsync method calls the HTTP REST OCR service (via the NuGet library extractions) and returns a strongly-typed object representing the results of that call. Replace the key and the endPoint in this method with those associated with your Computer Vision service.

The FormatOcrResult method loops through each region, line, and word of the service's return object. It concatenates the text of each word, separating words by spaces, lines by a carriage return and line feed, and regions by a double carriage return / line feed.

Add a Button and a TextBlock to the MainWindow.xaml form.

In the click event of that button add the following code.

Listing 2:

private async void GetText_Click(object sender, RoutedEventArgs e)
{
    string imagePath = @"xxxxxxx.jpg";
    OutputTextBlock.Text = "Thinking…";
    var language = OcrLanguages.En;
    OcrResult ocrResult = await OCRServices.UploadAndRecognizeImageAsync(imagePath, language);
    string resultText = await OCRServices.FormatOcrResult(ocrResult);
    OutputTextBlock.Text = resultText;
}
  


Replace xxxxxxx.jpg with the full path of an image file on disk that contains pictures of text.

You will need to add the following using statement to the top of MainWindow.xaml.cs.

using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
  

If you like, you can add code to allow users to retrieve an image and display that image on your form. This code is in the sample application from my GitHub repository, if you want to view it.

Running the form should look something like Fig. 4.

od04-RunningApp
Fig. 4

Wednesday, June 12, 2019 9:46:00 AM (GMT Daylight Time, UTC+01:00)
# Tuesday, June 11, 2019

In a previous article, I described the details of the OCR Service, which is part of the Microsoft Cognitive Services Computer Vision API.

To make this API useful, you need to write some code and build an application that calls this service.

In this article, I will show an example of a JavaScript application that calls the OCR web service.

If you want to follow along, you can find all the code in the "OCRDemo" project, included in this set of demos.

To use this demo project, you will first need to create a Computer Vision API service, as described here.

Read the project's read.me file, which explains the setup you need to do in order to run this with your account.

If you open index.html in the browser, you will see that it displays an image of a poem, along with some controls on the left:

  • A dropdown list to change the poem image
  • A dropdown list to select the language of the poem text
  • A [Get Text] button that calls the web service.

Fig. 1 shows index.html when it first loads:

oj01-WebPage
Fig. 1

    Let's look at the JavaScript that runs when you click the [Get Text] button. You can find it in script.js

    $("#GetTextFromPictureButton").click(function () {
        var outputDiv = $("#OutputDiv");
        outputDiv.text("Thinking…");
        var url = $("#ImageUrlDropdown").val();
        var language = $("#LanguageDropdown").val();

        try {
            var computerVisionKey = getKey();
        }
        catch (err) {
            outputDiv.html(missingKeyErrorMsg);
            return;
        }

        var webSvcUrl = "https://westcentralus.api.cognitive.microsoft.com/vision/v2.0/ocr";
        webSvcUrl = webSvcUrl + "?language=" + language;
        $.ajax({
            type: "POST",
            url: webSvcUrl,
            headers: { "Ocp-Apim-Subscription-Key": computerVisionKey },
            contentType: "application/json",
            data: '{ "Url": "' + url + '" }'
        }).done(function (data) {
            outputDiv.text("");

            var regionsOfText = data.regions;
            for (var r = 0; r < regionsOfText.length; r++) {
                var linesOfText = data.regions[r].lines;
                for (var l = 0; l < linesOfText.length; l++) {
                    var output = "";

                    var thisLine = linesOfText[l];
                    var words = thisLine.words;
                    for (var w = 0; w < words.length; w++) {
                        var thisWord = words[w];
                        output += thisWord.text;
                        output += " ";
                    }
                    var newDiv = "<div>" + output + "</div>";
                    outputDiv.append(newDiv);
                }
                outputDiv.append("<hr>");
            }

        }).fail(function (err) {
            $("#OutputDiv").text("ERROR!" + err.responseText);
        });
    });
      

    This code uses jQuery to simplify selecting elements, but raw JavaScript would work just as well.

    On the page is an empty div with the id="OutputDiv"

    In the first two lines, we select this div and set its text to "Thinking…" while the web service is being called.

        var outputDiv = $("#OutputDiv");
        outputDiv.text("Thinking…");

    Next, we get the URL of the image containing the currently displayed poem and the selected language. These both come from the selected items of the two dropdowns.

        var url = $("#ImageUrlDropdown").val(); 
        var language = $("#LanguageDropdown").val();
      

    Then, we get the API key, which is in the getKey() function, which is stored in the getkey.js file. You will need to update this file yourself, adding your own key, as described in the read.me.

        try { 
            var computerVisionKey = getKey(); 
        } 
        catch(err) { 
            outputDiv.html(missingKeyErrorMsg); 
            return; 
        }
      

    Now, it's time to call the web service. My Computer Vision API service was created in the West Central US region, so I've hard-coded the URL. You may need to change this, if you created your service in a different region.

    I add a querystring parameter to the URL to indicate the selected language.

    Then, I call the web service by submitting an HTTP POST request to the web service URL, passing in the appropriate headers and constructing a JSON document to pass in the request body.

        var webSvcUrl = "https://westcentralus.api.cognitive.microsoft.com/vision/v2.0/ocr";
        webSvcUrl = webSvcUrl + "?language=" + language;
        $.ajax({
            type: "POST",
            url: webSvcUrl,
            headers: { "Ocp-Apim-Subscription-Key": computerVisionKey },
            contentType: "application/json",
            data: '{ "Url": "' + url + '" }'
      

    Finally, I process the results when the HTTP response returns.

    JavaScript is a dynamic language, so I don't need to create any classes to identify the structure of the JSON that is returned; I just need to know the names of each property.

    The returned JSON contains an array of regions; each region contains an array of lines; and each line contains an array of words.

    In this simple example, I simply loop through each word in each line in each region, concatenating them together and adding some HTML to format line breaks.

    Then, I append this HTML to the outputDiv and follow it up with a horizontal rule to emphasize that it is the end.

        }).done(function (data) {
            outputDiv.text("");

            var regionsOfText = data.regions;
            for (var r = 0; r < regionsOfText.length; r++) {
                var linesOfText = data.regions[r].lines;
                for (var l = 0; l < linesOfText.length; l++) {
                    var output = "";

                    var thisLine = linesOfText[l];
                    var words = thisLine.words;
                    for (var w = 0; w < words.length; w++) {
                        var thisWord = words[w];
                        output += thisWord.text;
                        output += " ";
                    }
                    var newDiv = "<div>" + output + "</div>";
                    outputDiv.append(newDiv);
                }
                outputDiv.append("<hr>");
            }
      

    I also catch errors that might occur, displaying a generic error message in the outputDiv, where the returned text would have been.

        }).fail(function (err) {
            $("#OutputDiv").text("ERROR!" + err.responseText);
        });
      

    Fig. 2 shows the results after a successful web service call.

    oj02-Results
    Fig. 2

    Try this yourself to see it in action. The process is very similar in other languages.

    Tuesday, June 11, 2019 9:11:00 AM (GMT Daylight Time, UTC+01:00)
    # Monday, June 10, 2019

    Episode 567

    Elton Stoneman on Docker

    Elton Stoneman describes how to manage containers using Docker on a local machine and in the cloud.

    Monday, June 10, 2019 9:52:00 AM (GMT Daylight Time, UTC+01:00)
    # Friday, June 7, 2019

    The Microsoft Cognitive Services Computer Vision API contains functionality to infer a lot of information about a given image. One capability is to convert pictures of text into text, a process known as "Optical Character Recognition" or "OCR".

    Performing OCR on an image is simple and inexpensive. It is done through a web service call; but first, you must set up the Computer Vision Service, as described in this article.

    In that article, you were told to save two pieces of information about the service: The API Key and the URL. Here is where you will use them.

    HTTP Endpoint

    The OCR service is a web service. To call it, you send an HTTP POST request to an HTTP endpoint. The endpoint consists of the URL copied above, followed by "vision/v2.0/ocr", followed by some optional querystring parameters (which we will discuss later).

    So, if you create your service in the EAST US Azure region, the copied URL will be

    https://eastus.api.cognitive.microsoft.com/

    and the HTTP endpoint for the OCR service will be

    https://eastus.api.cognitive.microsoft.com/vision/v2.0/ocr

    Querystring Parameters

    The optional querystring parameters are

    language:

    The language code of the text you are recognizing (most codes are 2 characters, such as "en"; a few, such as "zh-Hans", are longer). This helps the service more accurately and quickly match pictures of words to the words they represent. If you omit this parameter, the system will analyze the text and guess an appropriate language. Currently, the service supports 26 languages. The code for each supported language is listed in Appendix 1 at the bottom of this article.

    detectOrientation

    "true", if you want the service to adjust the orientation of the image before performing OCR. If you pass "false" or omitting this parameter, the service will assume the image is oriented correctly.

    If you have an image with English text and you want the service to detect and adjust the image's orientation, the above URL becomes:

    https://eastus.api.cognitive.microsoft.com/vision/v2.0/ocr?language=en&detectOrientation=true

    HTTP Headers

    In the header of the HTTP request, you must add the following name/value pairs:

    Ocp-Apim-Subscription-Key

    The API key you copied above

    Content-Type

    The media type of the image you are passing to the service in the body of the HTTP request

    Possible values are:

    • application/json
    • application/octet-stream
    • multipart/form-data

    The value you pass must be consistent with the data in the body.

    If you select "application/json", you must pass in the request body a URL pointing to the image on the public Internet.

    If you select "application/json" or "application/octet-stream", you must pass the actual binary image in the request body.

    Body

    In the body of the HTTP request, you pass the image you want the service to analyze.

    If you selected "application/json" as the Content-Type in the header, pass a URL within a JSON document, with the following format:

    {"url":"image_url"}

    where image_url is a URL pointing to the image you want to recognize.

    Here is an example:

    {"url":"https://www.themeasuredmom.com/wp-content/uploads/2016/03/Slide11.png"}

    If you selected "application/octet-stream" or "multipart/form-data" as the Content-Type in the header, pass the actual binary image in the body of the request.

    The service has some restrictions on the images it can analyze.

    It cannot analyze an image larger than 4MB.

    The width and height of the image must be between 50 and 4,200 pixels

    The image must be one of the following formats: JPEG, PNG, GIF, BMP.

    Sample call with Curl

    Here is an example of a call to the service, using Curl:

    curl -v -X POST "https://eastus.api.cognitive.microsoft.com/vision/v2.0/ocr?language=en&detectOrientation=true" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: f27c7436c3a64d91a177111a6b594537" --data-ascii "{'url' : 'https://www.themeasuredmom.com/wp-content/uploads/2016/03/Slide11.png'}"

    (NOTE: I modified the key, so it will not work. You will need to replace it with your own key if you want this to work.)
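
    If you are calling the service from .NET, you can make the same request with HttpClient. The following is a minimal sketch; the endpoint, key, and image URL are placeholders that you would replace with your own values.

        using System;
        using System.Net.Http;
        using System.Text;
        using System.Threading.Tasks;

        class OcrRestDemo
        {
            static async Task Main()
            {
                // Placeholders: use the endpoint and key of your own Computer Vision service.
                string endpoint = "https://eastus.api.cognitive.microsoft.com/vision/v2.0/ocr?language=en&detectOrientation=true";
                string key = "xxxxxxx";
                string imageUrl = "https://www.themeasuredmom.com/wp-content/uploads/2016/03/Slide11.png";

                using (var client = new HttpClient())
                {
                    // The API key goes in the Ocp-Apim-Subscription-Key header.
                    client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

                    // Pass the image URL in a small JSON document in the request body.
                    var body = new StringContent("{\"url\":\"" + imageUrl + "\"}", Encoding.UTF8, "application/json");

                    HttpResponseMessage response = await client.PostAsync(endpoint, body);
                    string json = await response.Content.ReadAsStringAsync();
                    Console.WriteLine(json);
                }
            }
        }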

    Response

    If all goes well, you will receive an HTTP 200 (OK) response.

    In the body of that response will be the results of the OCR in JSON format.

    At the top level is the language, textAngle, and orientation

    Below that is an array of 0 or more text regions. Each region represents a block of text within the image.

    Each region contains an array of 0 or more lines of text.

    Each line contains an array of 0 or more words.

    Each region, line, and word contains a bounding box, consisting of the left, top, width, and height of the word(s) within.

    Here is a partial example of the JSON returned from a successful web service call:

    {
        "language": "en",
        "textAngle": 0.0,
        "orientation": "Up",
        "regions": [
            {
                "boundingBox": "147,96,622,1095",
                "lines": [
                    {
                        "boundingBox": "408,96,102,56",
                        "words": [
                            {
                                "boundingBox": "408,96,102,56",
                                "text": "Hey"
                            }
                        ]
                    },
                    {
                        "boundingBox": "282,171,350,45",
                        "words": [
                            {
                                "boundingBox": "282,171,164,45",
                                "text": "Diddle"
                            },
                            {
                                "boundingBox": "468,171,164,45",
                                "text": "Diddle"
                            }
                        ]
                    },
                    etc...
                     }
                ]
            }
        ]
    }
      

    The full JSON can be found in Appendix 2 below.
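
    Because the response is plain JSON, you can walk the regions, lines, and words with any JSON library. The following is a minimal sketch using Newtonsoft.Json (my choice here; any JSON parser would work) that concatenates the recognized words into a single string.

        using System.Text;
        using Newtonsoft.Json.Linq;

        class OcrResponseParser
        {
            // json is the body of a successful OCR response.
            internal static string ExtractText(string json)
            {
                var result = JObject.Parse(json);
                var sb = new StringBuilder();

                // Walk regions -> lines -> words and concatenate the text.
                foreach (var region in result["regions"])
                {
                    foreach (var line in region["lines"])
                    {
                        foreach (var word in line["words"])
                        {
                            sb.Append((string)word["text"]);
                            sb.Append(" ");
                        }
                        sb.AppendLine();
                    }
                    sb.AppendLine();
                }
                return sb.ToString();
            }
        }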

    Errors

    If an error occurs, the response will not be HTTP 200. It will be an HTTP response code of 400 or greater. Additional error information will be in the body of the response.

    Common errors include:

    • Images too large or too small
    • Image not found (It might require a password or be behind a firewall)
    • Invalid image format
    • Incorrect API key
    • Incorrect URL (It must match the API key. If you have multiple services, it’s easy to mix them up)
    • Miscellaneous spelling errors (e.g., not entering a valid language code or misspelling a header parameter)

    In this article, I showed how to call the Cognitive Services OCR Computer Vision Service.

    Appendix 1: Supported languages

    zh-Hans (ChineseSimplified)
    zh-Hant (ChineseTraditional)
    cs (Czech)
    da (Danish)
    nl (Dutch)
    en (English)
    fi (Finnish)
    fr (French)
    de (German)
    el (Greek)
    hu (Hungarian)
    it (Italian)
    ja (Japanese)
    ko (Korean)
    nb (Norwegian)
    pl (Polish)
    pt (Portuguese)
    ru (Russian)
    es (Spanish)
    sv (Swedish)
    tr (Turkish)
    ar (Arabic)
    ro (Romanian)
    sr-Cyrl (SerbianCyrillic)
    sr-Latn (SerbianLatin)
    sk (Slovak)

    Appendix 2: JSON Response Example

    {
        "language": "en",
        "textAngle": 0.0,
        "orientation": "Up",
        "regions": [
            {
                "boundingBox": "147,96,622,1095",
                "lines": [
                    {
                        "boundingBox": "408,96,102,56",
                        "words": [
                            {
                                "boundingBox": "408,96,102,56",
                                "text": "Hey"
                            }
                        ]
                    },
                    {
                        "boundingBox": "282,171,350,45",
                        "words": [
                            {
                                "boundingBox": "282,171,164,45",
                                "text": "Diddle"
                            },
                            {
                                "boundingBox": "468,171,164,45",
                                "text": "Diddle"
                            }
                        ]
                    },
                    {
                        "boundingBox": "239,336,441,46",
                        "words": [
                            {
                                "boundingBox": "239,336,87,46",
                                "text": "Hey"
                            },
                            {
                                "boundingBox": "359,337,144,35",
                                "text": "diddle"
                            },
                            {
                                "boundingBox": "536,337,144,35",
                                "text": "diddle"
                            }
                        ]
                    },
                    {
                        "boundingBox": "169,394,576,35",
                        "words": [
                            {
                                "boundingBox": "169,394,79,35",
                                "text": "The"
                            },
                            {
                                "boundingBox": "279,402,73,27",
                                "text": "cat"
                            },
                            {
                                "boundingBox": "383,394,83,35",
                                "text": "and"
                            },
                            {
                                "boundingBox": "500,394,70,35",
                                "text": "the"
                            },
                            {
                                "boundingBox": "604,394,141,35",
                                "text": "fiddle"
                            }
                        ]
                    },
                    {
                        "boundingBox": "260,452,391,50",
                        "words": [
                            {
                                "boundingBox": "260,452,79,35",
                                "text": "The"
                            },
                            {
                                "boundingBox": "370,467,80,20",
                                "text": "cow"
                            },
                            {
                                "boundingBox": "473,452,178,50",
                                "text": "jumped"
                            }
                        ]
                    },
                    {
                        "boundingBox": "277,509,363,35",
                        "words": [
                            {
                                "boundingBox": "277,524,100,20",
                                "text": "over"
                            },
                            {
                                "boundingBox": "405,509,71,35",
                                "text": "the"
                            },
                            {
                                "boundingBox": "509,524,131,20",
                                "text": "moon."
                            }
                        ]
                    },
                    {
                        "boundingBox": "180,566,551,49",
                        "words": [
                            {
                                "boundingBox": "180,566,79,35",
                                "text": "The"
                            },
                            {
                                "boundingBox": "292,566,103,35",
                                "text": "little"
                            },
                            {
                                "boundingBox": "427,566,82,49",
                                "text": "dog"
                            },
                            {
                                "boundingBox": "546,566,185,49",
                                "text": "laughed"
                            }
                        ]
                    },
                    {
                        "boundingBox": "212,623,493,51",
                        "words": [
                            {
                                "boundingBox": "212,631,42,27",
                                "text": "to"
                            },
                            {
                                "boundingBox": "286,638,72,20",
                                "text": "see"
                            },
                            {
                                "boundingBox": "390,623,96,35",
                                "text": "such"
                            },
                            {
                                "boundingBox": "519,638,20,20",
                                "text": "a"
                            },
                            {
                                "boundingBox": "574,631,131,43",
                                "text": "sport."
                            }
                        ]
                    },
                    {
                        "boundingBox": "301,681,312,35",
                        "words": [
                            {
                                "boundingBox": "301,681,90,35",
                                "text": "And"
                            },
                            {
                                "boundingBox": "425,681,70,35",
                                "text": "the"
                            },
                            {
                                "boundingBox": "528,681,85,35",
                                "text": "dish"
                            }
                        ]
                    },
                    {
                        "boundingBox": "147,738,622,50",
                        "words": [
                            {
                                "boundingBox": "147,753,73,20",
                                "text": "ran"
                            },
                            {
                                "boundingBox": "255,753,114,30",
                                "text": "away"
                            },
                            {
                                "boundingBox": "401,738,86,35",
                                "text": "with"
                            },
                            {
                                "boundingBox": "519,738,71,35",
                                "text": "the"
                            },
                            {
                                "boundingBox": "622,753,147,35",
                                "text": "spoon."
                            }
                        ]
                    },
                    {
                        "boundingBox": "195,1179,364,12",
                        "words": [
                            {
                                "boundingBox": "195,1179,45,12",
                                "text": "Nursery"
                            },
                            {
                                "boundingBox": "242,1179,38,12",
                                "text": "Rhyme"
                            },
                            {
                                "boundingBox": "283,1179,36,9",
                                "text": "Charts"
                            },
                            {
                                "boundingBox": "322,1179,28,12",
                                "text": "from"
                            },
                            {
                                "boundingBox": "517,1179,11,10",
                                "text": "C"
                            },
                            {
                                "boundingBox": "531,1179,28,9",
                                "text": "2017"
                            }
                        ]
                    },
                    {
                        "boundingBox": "631,1179,90,12",
                        "words": [
                            {
                                "boundingBox": "631,1179,9,9",
                                "text": "P"
                            },
                            {
                                "boundingBox": "644,1182,6,6",
                                "text": "a"
                            },
                            {
                                "boundingBox": "655,1182,7,9",
                                "text": "g"
                            },
                            {
                                "boundingBox": "667,1182,7,6",
                                "text": "e"
                            },
                            {
                                "boundingBox": "690,1179,31,12",
                                "text": "7144"
                            }
                        ]
                    }
                ]
            }
        ]
    }
      
    Friday, June 7, 2019 9:09:00 AM (GMT Daylight Time, UTC+01:00)
    # Thursday, June 6, 2019

    GCast 51:

    Creating an Azure Container Instance

    Learn how to create an Azure Container instance from a container repository.

    Azure | GCast | IAAS | Screencast | Video
    Thursday, June 6, 2019 9:15:00 AM (GMT Daylight Time, UTC+01:00)
    # Wednesday, June 5, 2019

    The Microsoft Cognitive Services Computer Vision API contains functionality to infer a lot of information about a given image.

    As of this writing, the API is on version 2.0 and supports the following capabilities:

    Analyze an Image

    Get general information about an image, such as the objects found, what each object is and where it is located. It can even identify potentially pornographic images.

    Analyze Faces

    Find the location of each face in an image and determine information about each face, such as age, gender, and type of facial hair or glasses.

    Optical Character Recognition (OCR)

    Convert a picture of text into text

    Recognize Celebrities

    Recognize famous people from photos of their face

    Recognize Landmarks

    Recognize famous landmarks, such as the Statue of Liberty or Diamond Head Volcano.

    Analyze Video

    Retrieve keywords to describe a video at different points in time as it plays.

    Generate a Thumbnail

    Change the size and shape of an image, without cropping out the main subject.

    Getting Started

    To get started, you need to create a Computer Vision Service. To do this, navigate to the Azure Portal, log in, click the [Create a resource] button (Fig. 1), and enter "Computer Vision" in the Search box, as shown in Fig. 2.

    cv01-CreateResource
    Fig. 1

    cv02-SearchForComputerVision
    Fig. 2

    A dialog displays, with information about the Computer Vision Service, as shown in Fig. 3.

    cv03-ComputerVisionSplashPage
    Fig. 3

    Click the [Create] button to display the Create Computer Vision Service blade, as shown in Fig. 4.

    cv04-NewSvc
    Fig. 4

    At the "Name" field, enter a name by which you can easily identify this service. This name must be unique among your services, but need not be globally unique.

    At the "Subscription" field, select the Subscription with which you want to associate this service. Most of you will only have one subscription.

    At the "Location" field, select the Azure Region in which to store this service. Consider where the users of this service will be, so you can reduce latency.

    At the "Pricing tier" field, select "F0" to use this service for free or "S1" to incur a small charge for each call to the service. If you select the free service, you will be limited in the number and frequency of calls that can be made.

    At the "Resource group" field, select a resource group in which to store your service or click "Create new" to store it in a newly-created resource group. A resource group is a logical container for Azure resources.

    Click the [Create] button to create the Computer Vision service.

    Usually, it takes less than a minute to create a Computer Vision Service. When Azure has created this service, you can navigate to it by its name or the name of the resource group.

    Two pieces of information are critical when using the service: The Endpoint and the API keys.

    The Endpoint can be found on the service's Overview blade, as shown in Fig. 5.

    cv05-OverviewBlade
    Fig. 5

    The API Keys can be found on the service's "Keys" blade, as shown in Fig. 6. There are two keys in case one key is compromised: you can use the other key while the first is regenerated, in order to minimize downtime.

    cv06-KeysBlade
    Fig. 6

    Copy the URL and one of the API keys. You will need them to call the web services. We will describe how to make specific calls in future articles.

    Wednesday, June 5, 2019 4:46:00 PM (GMT Daylight Time, UTC+01:00)
    # Monday, May 27, 2019

    Episode 564

    Eric Boyd on Microservices

    Eric Boyd describes the principles of Microservices and how he uses these principles to build better software.

    Links:

    Monday, May 27, 2019 9:23:00 AM (GMT Daylight Time, UTC+01:00)
    # Monday, May 20, 2019

    Episode 564

    David Makogon on Streaming Data

    David Makogon talks about streaming data and the tools to help you make it happen.

    David on Twitter

    Monday, May 20, 2019 9:10:00 AM (GMT Daylight Time, UTC+01:00)
    # Wednesday, May 8, 2019

    Keeping a computer system available all or almost all the time is a challenge.

    Sometimes, software patches or upgrades need to be installed on a server. Sometimes, old hardware needs to be replaced. Sometimes, hardware unexpectedly fails. Sometimes, power to a building or part of a building fails.

    All these things can contribute to downtime - some of it planned and some of it unplanned.

    Monitoring, redundancy, and planning all reduce the risk of downtime in Azure.

    Many resources in Azure are written in triplicate. Only one copy of that data or service is live at any given time. The other two exist in case the live copy becomes unavailable. If this happens, Azure will automatically route requests to one of these "backup" copies. The live copy is sometimes called a "hot" copy, while the 2 redundant backups are sometimes referred to as "cold" copies.

    This works well during planned software and hardware upgrades. The cold copies' servers are upgraded first; then, new requests are routed to one of the upgraded cold copies, making it the hot copy, before the original hot copy is upgraded. Azure maintains something called "Update Domains" to help manage this. Systems in separate Update Domains will not be shut down for upgrades simultaneously, in order to avoid downtime.

    Unexpected downtime is harder to manage. This is typically caused by hardware or software failure or a failure of a system, such as a power supply on which a service depends. All hardware fails at some point, so this must be dealt with.

    To handle these failures, Azure continuously monitors its systems to determine when a failure occurs. When a failure on a hot copy is detected, requests are routed to a cold copy; then, a new copy of the service or data is deployed onto available hardware in order to maintain 2 redundant cold copies. Redundant copies of a service are kept in different parts of a datacenter, so that they don't rely on a single point of failure. These independent parts of the data center are known as "Fault Domains" because a fault in one Fault Domain will not affect services in the other Fault Domains.

    As a result of these practices, Azure can guarantee a certain level of uptime for each of its paid services. The level is dependent on the service and is usually expressed in terms of percentage uptime. Azure guaranteed uptimes range from 99.5% to 100%. This uptime guarantee is documented in a "Service Level Agreement", or "SLA".

    You can view the current uptime guarantee for each Azure service here.

    An uptime of 99.5% would be down a maximum of 1.83 days per year and an uptime of 99.99% would be down a maximum of 52.6 minutes per year.
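
    These figures are simple arithmetic: the allowed downtime is the total time in the period multiplied by (100% minus the SLA percentage). The sketch below shows the calculation in C# for a few common SLA levels.

        using System;

        class SlaDowntime
        {
            static void Main()
            {
                // Allowed downtime per year for a few common SLA percentages.
                double minutesPerYear = 365.25 * 24 * 60;
                foreach (double sla in new[] { 99.5, 99.9, 99.95, 99.99 })
                {
                    double downtimeMinutes = minutesPerYear * (1 - sla / 100.0);
                    Console.WriteLine($"{sla}% uptime allows about {downtimeMinutes:F0} minutes ({downtimeMinutes / 60:F1} hours) of downtime per year.");
                }
            }
        }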

    Azure guarantees this by agreeing to credit all or part of a customer's charges if the uptime target is not met in any given month. The exact credit amount depends on how much the target is missed by.

    As of this writing, here are the guaranteed uptimes for each Azure service.

    Service Uptime Notes
    Active Directory 99.90%
    Active Directory B2C 99.90%
    AD Domain Service 99.90%
    Analysis Service 99.90%
    API Management 99.90%
    App Service 99.50%
    Application Gateway 99.50%
    Application Insights 99.90%
    Automation 99.90%
    DevOps 99.90%
    Firewall 99.50%
    Front Door Service 99.99%
    Lab Services 99.90%
    Maps 99.90%
    Databricks 99.50%
    Backup 99.50%
    BizTalk Services 99.90%
    Bot Service 99.90%
    Cache 99.90%
    Cognitive Services 99.90%
    CDN 99.90%
    Cloud Services 99.50% Assumes at least 2 instances
    VMs 99.50% Assumes at least 2 instances
    VMs 99.90% Assumes Premium storage
    CosmosDB 99.99%
    Data Catalog 99.90%
    Data Explorer 99.90%
    Data Lake Analytics 99.90%
    Data Lake Storage Gen1 99.90%
    DDoS Protection 99.99%
    DNS 100.00%
    Event Grid 99.99%
    Event Hubs 99.90%
    ExpressRoute 99.50%
    Azure Functions 99.50%
    HockeyApp 99.90%
    HDInsight 99.90%
    IoT Central 99.90%
    IoT Hub 99.90%
    Key Vault 99.90%
    AKS 99.50%
    Log Analytics 99.90%
    Load Balancer 99.99%
    Logic Apps 99.90%
    ML Studio 99.95%
    Media Services 99.90%
    Mobile Services 99.90%
    Azure Monitor 99.90%
    Multi-Factor Authentication 99.90%
    MySQL 99.99%
    Network Watcher 99.90%
    PostgreSQL 99.99%
    Power BI Embedded 99.90%
    SAP HANA on Azure Large Instances 99.99%
    Scheduler 99.99%
    Azure Search 99.90%
    Security Center 99.90%
    Service Bus 99.90%
    SignalR Service 99.90%
    Site Recovery 99.90%
    SQL Database 99.99%
    SQL Data Warehouse 99.90%
    SQL Server Stretch Database 99.90%
    Storage Accounts 99.99% 99.9% for Cold Storage
    StorSimple 99.90%
    Stream Analytics 99.90%
    Time Series Insights 99.90%
    Traffic Manager 99.99%
    Virtual WAN 99.95%
    VS App Center 99.90%
    VPN Gateway 99.95%
    VPN Gateway for VPN or ExpressRoute 99.90%
    Information Protection 99.90%
    Win10 IoT Core Svcs 99.90%
    VMWare Solution 99.90%

    Services like Azure Backup and Azure Functions, which can be easily retried, have the lowest guaranteed uptime.

    The highest guaranteed uptimes are reserved for mission-critical services, such as DNS and Traffic Manager, along with all the database and storage offerings.

    Free services are not listed here, as they almost never have a guaranteed uptime. Even if they did, there is nothing to credit to the account.

    Azure has systems in place to provide high availability and reliability, and Microsoft has enough confidence in those systems to guarantee a predictable level of uptime and to back that guarantee with monetary credits.

    Wednesday, May 8, 2019 9:55:00 AM (GMT Daylight Time, UTC+01:00)
    # Monday, April 22, 2019

    Episode 560

    Frank Gill on Azure SQL Database Managed Instances

    DBA Frank Gill discusses Azure SQL Database Managed Instances - a cloud-based managed database service. He describes what they are, how they differ from Azure SQL Databases, and when it is appropriate to consider them.

    Links:

    https://skreebydba.com/
    https://twitter.com/skreebydba

    Monday, April 22, 2019 9:49:00 AM (GMT Daylight Time, UTC+01:00)
    # Monday, April 1, 2019

    Episode 557

    Brent Stineman on the Evolution of Serverless

    Brent Stineman describes Serverless cloud technologies and how they have evolved to make applications more flexible.

    Monday, April 1, 2019 9:22:00 AM (GMT Daylight Time, UTC+01:00)
    # Thursday, March 14, 2019

    GCast 39:

    Azure Search REST API

    Azure Search allows you to make your internal data searchable in the same way that search engines like Google and Bing make public information on the Internet searchable.

    Thursday, March 14, 2019 8:31:00 AM (GMT Standard Time, UTC+00:00)
    # Tuesday, March 12, 2019

    In a previous article, we saw how to create an Azure IoT Hub.    

    In this article, we will show how to add devices to the IoT Hub.

    When I first began working with IoT Hub devices, I was confused by language that suggested I was "Adding" or "Creating" a device. What we are really doing is registering a device with the hub, so that a physical device of the same name can communicate with this hub. When you see words like "Add" and "Create", think of them as adding or creating the registration entry.

    To begin, log into the Azure Portal and navigate to your IoT Hub, as shown in Fig. 1.

    id01-IotHubOverviewBlade
    Fig. 1

    Click "IoT devices" to open the "IoT devices" blade, as shown in Fig. 2.

    id02-IotDevicesBlade
    Fig. 2

    If this hub has any devices, you will see them listed. You can use the fields at the top to filter the list to more quickly find one or more devices.

    To add a new device, click the [Add] button (Fig. 3) to display the "Create a device" blade, as shown in Fig. 4.

    id03-AddDeviceButton
    Fig. 3

    id04-CreateADevice
    Fig. 4

    At the "Device ID", enter a name for this device. The name must be unique among this hub's devices.

    At the "Authentication type", select the type of authentication you wish this device to use. If you select "Symmetric key", you have the option to enter your keys or allow the system to generate keys for you.

    Click the [Save] button to create this device.

    After a few seconds, the device is created and displays in the device list of the "IoT devices" blade, as shown in Fig. 5.

    id05-IotDevicesBlade
    Fig. 5
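
    If you need to register many devices, you can script this step instead of using the portal. The following is a minimal sketch using the Microsoft.Azure.Devices service SDK (a NuGet package); the connection string and device ID are placeholders that you would replace with your own values.

        using System;
        using System.Threading.Tasks;
        using Microsoft.Azure.Devices;

        class RegisterDevice
        {
            static async Task Main()
            {
                // Placeholder: a service (for example, "iothubowner") connection string for your IoT Hub.
                string hubConnectionString = "HostName=xxxxx.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=xxxxx";
                string deviceId = "my-test-device";

                var registryManager = RegistryManager.CreateFromConnectionString(hubConnectionString);

                // Register the device; the hub generates symmetric keys for it.
                Device device = await registryManager.AddDeviceAsync(new Device(deviceId));

                Console.WriteLine($"Registered device: {device.Id}");
                Console.WriteLine($"Primary key: {device.Authentication.SymmetricKey.PrimaryKey}");
            }
        }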

    If you click on the device, you can see the "Device details" for this device, as shown in Fig. 6.

    id06-DeviceDetails
    Fig. 6

    The connection string is required to target this specific device.

    Now that you have a device registered, a device of that name can communicate with this hub.
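
    For example, here is a minimal sketch of a device sending a telemetry message with the Microsoft.Azure.Devices.Client SDK, using the device connection string shown in Fig. 6; the connection string and payload are placeholders that you would replace with your own values.

        using System;
        using System.Text;
        using System.Threading.Tasks;
        using Microsoft.Azure.Devices.Client;

        class SendTelemetry
        {
            static async Task Main()
            {
                // Placeholder: the device connection string from the "Device details" blade.
                string deviceConnectionString = "HostName=xxxxx.azure-devices.net;DeviceId=my-test-device;SharedAccessKey=xxxxx";

                var deviceClient = DeviceClient.CreateFromConnectionString(deviceConnectionString, TransportType.Mqtt);

                // Send a simple JSON payload as a device-to-cloud message.
                string payload = "{\"temperature\": 21.5}";
                using (var message = new Message(Encoding.UTF8.GetBytes(payload)))
                {
                    await deviceClient.SendEventAsync(message);
                }

                Console.WriteLine("Message sent.");
            }
        }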

    Azure | IoT
    Tuesday, March 12, 2019 9:48:00 AM (GMT Standard Time, UTC+00:00)
    # Friday, March 8, 2019

    The Internet of Things, or IoT, allows you to capture data from devices across the planet and use the power of the cloud to store and manage that data.

    Microsoft Azure offers IoT Hubs as a way to capture data from Internet-connected devices.

    To create a new IoT hub, navigate to the Azure portal and log in.

    Click the [Create a resource] button (Fig. 1) and select Internet of Things | IoT hub from the menu, as shown in Fig. 2.

    ih01-CreateNew
    Fig. 1

    ih02-Menu
    Fig. 2

    The "IoT hub" blade displays, as shown in Fig. 3.

    ih03-IoTBlade
    Fig. 3

    At the "Subscription" field, select the subscription in which you want to store this hub. Many of you will have only one subscription and it will already be selected.

    At the "Resource Group" field, select a Resource Group in which to store this hub. You can create a new Resource Group by clicking the "Create new" link and entering a name for the new group, as shown in Fig. 4.

    ih04-NewResourceGroup
    Fig. 4

    At the "Region" field, select the geographic region in which you want your hub to be located. Considerations include the location of the devices that will connect to this hub and the location other systems with which the hub will interact.

    At the "IoT Hub Name" field, enter a unique name for this hub.

    After you have completed the form, click the [Review + create] button. A summary page displays, as shown in Fig. 5.

    ih05-iotHubConfirmation
    Fig. 5

    If any errors display, click the [Previous] button and correct them; otherwise, click the [Create] button to create a new IoT Hub. It will take several minutes to deploy all the necessary resources and create this hub.

    After the hub is created, you can navigate to it, as shown in Fig. 6.

    ih06-IotHubOverviewBlade
    Fig. 6

    The "Overview" blade is selected by default and displays summary information about your hub, as well as links to documentation, so you can learn more about managing and working with this hub.

    In this article, you learned how to create a new Azure IoT hub. A hub requires more configuration to be useful. We will cover this configuration in a future article.

    Azure | IoT
    Friday, March 8, 2019 9:47:00 AM (GMT Standard Time, UTC+00:00)
    # Thursday, March 7, 2019

    GCast 38:

    Azure Search

    Azure Search allows you to make your internal data searchable in the same way that search engines like Google and Bing make public information on the Internet searchable.

    Thursday, March 7, 2019 9:50:00 AM (GMT Standard Time, UTC+00:00)
    # Wednesday, March 6, 2019

    The Internet of Things (or IoT) has revolutionized the way we think of computing.

    In the past, computers were self-contained, general purpose machines that could load complex operating systems, run multiple applications, and perform a wide variety of tasks. They could communicate with one another in order to either share data or distribute workloads.

    Now, tiny computers can be found in a huge number of devices around one's home or workplace. When these devices are connected to the cloud, they become far more powerful because much of the processing and storage traditionally done on the computer is moved to the massively-scalable cloud.

    At home, refrigerators, thermostats, and automobiles contain computers that send and receive information, making them better able to adapt to the world around them.

    Businesses take advantage of devices connected to manufacturing machines or vehicles or weather detectors to monitor local conditions and productivity. Capturing data from these devices allows them to respond to anomalies in the data that may indicate a need for action. Imagine a monitor on a factory floor that monitors the health of an assembly line and sends an alert to a repair team if the line breaks down. Or, better still, if the data indicates a strong probability it will break down soon. Imagine a shipping company being able to track the exact location and health of every one of their trucks and to re-route them as necessary.

    Industries as disparate as transportation, clothing, farming, and healthcare have benefited from the IoT revolution.

    Cloud tools, such as Microsoft Azure IoT Hub, allow businesses to capture data from many devices, then store, analyze, and route that data to a particular location or application. As applications become more complex, cloud tools become both more powerful and simpler to create.

    These tools offer things like real-time analytics, message routing, data storage, and automatic scalability.

    This IoT revolution has enabled companies to capture huge amounts of data. Tools like Machine Learning allow these same companies to find patterns in that data to facilitate things like predictive analysis.

    The cost of both hardware and cloud services has fallen dramatically, which has accelerated this trend.

    The trend shows no signs of slowing and companies continue to think of new ways to connect devices to the cloud and use the data collected.

    The next series of articles will explore how to process IoT data using the tools in Microsoft Azure.

    Wednesday, March 6, 2019 9:46:00 AM (GMT Standard Time, UTC+00:00)
    # Friday, March 1, 2019

    Azure Search allows you to create a service making your own data searchable, in much the same way that public search engines like Google and Bing make data on the Internet searchable.

    In previous articles, I showed how to create an Azure Search Service; and how to import and index data in that service.

    In this article, I will show how to use a REST API exposed by the Azure Search service to return indexed results, based on search criteria.

    You can do some limited searching using the Azure portal. Navigate to the Azure portal and login; then, navigate to the Azure Search service, as shown in Fig. 1.

    as01-OverviewBlade
    Fig. 1

    Click the [Search explorer] button (Fig. 2) to display the "Search explorer" blade, as shown in Fig. 3.

    as02-SearchExplorerButton
    Fig. 2

    as03-SearchExplorerBlade
    Fig. 3

    At the "Query string" field, you can enter a search term and click the [Search] button to return all the data (in JSON format) that matches the search term in any field you marked "FILTERABLE" in your index. Clicking the [Search] button issues an HTTP GET against the Search service's REST API. The results are shown in Fig. 4.

    as04-SearchExplorerResults
    Fig. 4

    You have more flexibility when calling the REST API with a POST request. This is not possible through the portal, but you can use a tool like Postman to make these requests.

    The URL to which you POST can be found on the Azure Search service's "Overview" tab, as shown in Fig. 5.

    as05-Url
    Fig. 5

    The URL takes the form:

    https://<servicename>.search.windows.net

    where <servicename> is the name you assigned to this service.

    You will also need the Query key. You can find the Query key by opening the Azure Search service's "Keys" blade (Fig. 6) and clicking "Manage query keys" to display the "Manage query keys" blade, as shown in Fig. 7.

    as06-KeysBlade
    Fig. 6

    as07-ManageQueryKeys
    Fig. 7

    To test POSTing to the REST API, open Postman and open a new tab, as shown in Fig. 8.

    as08-Postman
    Fig. 8

    At the Verb dropdown, select "POST".

    At the "Request URL" field, paste in the URL. This will be the URL copied from the "Overview" tab, followed by "/indexes/<indexname>/docs/search?api-version=2017-11-11

    where <indexname> is the name of the index you created in the Azure Search service.

    This is shown in Fig. 9.

    as09-PostmanParamsTab
    Fig. 9

    Select the "Headers" tab, as shown in Fig. 10.

    as10-PostmanHeaderTab
    Fig. 10

    Enter the following 2 key/value pairs:

    Key="api-key"; value=the Query key copied from the service.

    Key="Content-Type"; value="application/json"

    These are shown in Fig. 11.

    as11-PostmanHeaderTab
    Fig. 11

    Select the "Body" tab to enter search parameters, as shown in Fig. 12.

    as12-PostmanBodyTab
    Fig. 12

    The example shown
    {
      "select": "*",
      "filter": "state eq 'IL'",
      "orderby": "presentationDate desc"
    }

    instructs the API to select all the fields; to filter the data, returning only those documents in which the "state" field equals "IL"; and to sort the results in descending order by presentation date.

    Click the [Send] button to POST this request to the API. The results are shown in Fig. 13.

    as13-PostmanResults
    Fig. 13
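
    Postman is convenient for exploring, but you can issue the same request from code. The following is a minimal C# sketch using HttpClient; the service name, index name, and query key are placeholders you would replace with your own values.

        using System;
        using System.Net.Http;
        using System.Text;
        using System.Threading.Tasks;

        class SearchQuery
        {
            static async Task Main()
            {
                // Placeholders: substitute your service name, index name, and query key
                var url = "https://<servicename>.search.windows.net/indexes/<indexname>/docs/search?api-version=2017-11-11";
                var body = "{ \"select\": \"*\", \"filter\": \"state eq 'IL'\", \"orderby\": \"presentationDate desc\" }";

                using (var client = new HttpClient())
                {
                    client.DefaultRequestHeaders.Add("api-key", "<your query key>");
                    var content = new StringContent(body, Encoding.UTF8, "application/json");
                    var response = await client.PostAsync(url, content);
                    Console.WriteLine(await response.Content.ReadAsStringAsync());
                }
            }
        }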

    In this article, you learned how to use the REST API to access an Azure Search service.

    Friday, March 1, 2019 9:27:00 AM (GMT Standard Time, UTC+00:00)
    # Thursday, February 28, 2019

    GCast 37:

    Managing Blobs with the Azure Storage Explorer

    The Azure Storage Explorer is a free resource to manage Azure Storage Accounts.
    This video shows how to manage Azure blobs with this tool.

    Thursday, February 28, 2019 8:55:00 AM (GMT Standard Time, UTC+00:00)
    # Wednesday, February 27, 2019

    Azure Search allows you to create a service making your own data searchable, in much the same way that public search engines like Google and Bing make data on the Internet searchable.

    There are three steps to configuring Azure Search:

    1. Create Azure Search Service
    2. Create Index
    3. Import data

    In a previous article, I showed how to create an Azure Search Service.

    This article will show how to import data into Azure Search service; then index that data.

    Navigate to the Azure portal and log in.

    For this demo, I am indexing a Table in Azure storage containing information about my public speaking events, as shown in the Azure Data Explorer in Fig. 1.

    as01-TableData
    Fig. 1

    Open your Azure Search Service, as shown in Fig. 2.

    as02-OverviewBlade
    Fig. 2

    Click the [Import data] button (Fig. 3) to display the "Import data" blade, as shown in Fig. 4.

    as03-ImportDataButton
    Fig. 3

    as04-ImportDataBlade
    Fig. 4

    At the "Data Source" dropdown, select "Azure Table Storage", as shown in in Fig. 5.

    as05-DataSourceTableStorage
    Fig. 5

    The "Connect your data" tab displays, as shown in Fig. 6.

    as06-ConnectYourData
    Fig. 6

    At the "Name" field, enter a name for this data source.

    At the "Connection string" field, click "Choose an existing connection" and select the storage account containing your data, as shown in Fig. 7.

    as07-ChooseConnection
    Fig. 7

    At the "Table name" field, enter the name of the table containing your data.

    Click the [Next] buttons at the bottom of the tab until you advance to the "Customize target index" tab, as shown in Fig. 8.

    as08-CustomizeTargetIndex
    Fig. 8

    This tab displays all the fields in your data. Here you can select which fields can be retrieved, which can be filtered on, which can be sorted on, etc.

    After making all your selections, click the [Next: Create an indexer] button at the bottom of the tab to advance to the "Create an indexer" tab, as shown in Fig. 9.

    as09-CreateIndexer
    Fig. 9

    On this tab, you can configure how often your index will be updated from data changes. You can also decide whether to remove deleted items from your index (which will slow down indexing).

    Click the [Submit] button to begin the first indexing and set the indexing schedule as configured.

    A few minutes after the indexer runs, you should see the DOCUMENT COUNT and STORAGE SIZE values in the "Indexes" tab of the Search Service's "Overview" blade, as shown in Fig. 10.

    as10-IndexesTab
    Fig. 10

    In this article, I showed how to import data into an Azure Search Service; then index that data.

    In a future article, I will show how to call the search service.

    Wednesday, February 27, 2019 9:21:00 AM (GMT Standard Time, UTC+00:00)
    # Tuesday, February 26, 2019

    Azure Search allows you to create a service making your own data searchable, in much the same way that public search engines like Google and Bing make data on the Internet searchable.

    Before you can begin using Azure Search, you must perform the following actions:

    1. Create Azure Search Service
    2. Create Index
    3. Import data
    4. Index the data

    This article will show how to create an Azure Search Service.

    Navigate to the Azure portal and log in.

    Click the [Create a Resource] button (Fig. 1) to display a list of Azure resource categories.

    as01-CreateResourceButton
    Fig. 1

    At the Search box, enter "Azure Search" and press Enter, as shown in Fig. 2.

    as02-SearchAzureSearch
    Fig. 2

    From the list of matching services, click on "Azure Search", as shown in Fig. 3.

    as03-SelectAzureSearch
    Fig. 3

    A blade describing the features of Azure search displays, as shown in Fig. 4.

    as03-SelectAzureSearch
    Fig. 4

    Click the [Create] button at the bottom of this blade.

    The "New Search Service" blade displays, as shown in Fig. 5.

    as04-CreateAzureSearch
    Fig. 5

    At the "URL" field, enter a unique name for this service. The service will expose a REST endpoint with the URL: https://xxxx.search.windows.net, where xxxx is the name you enter here.

    At the "Subscription" dropdown, select the Subscription in which you want to store this service.

    At the "Resource group" dropdown, select the resource group in which to store this service or click the "Create new" link to add a new resource group, as shown in Fig. 6.

    as06-NewResourceGroup
    Fig. 6

    At the "Location" dropdown, select the region in which you want to store this service. The region should be near the users of the service or near the data you intend to index.

    At the "Pricing tier" field, select an appropriate pricing tier. Clicking this field expands the "Choose your pricing tier" blade (Fig. 7), which displays the approximately monthly cost and the features of each tier.

    as07-PricingTier
    Fig. 7

    When you have completed all the fields in the "New Search Service" blade, click the [Create] button to create the service.

    When the service is created, you can navigate to it, as shown in Fig. 8.

    as08-OverviewBlade
    Fig. 8

    This article showed how to create a new Azure Search Service. In the next article, we will create an Index for this service.

    Tuesday, February 26, 2019 9:08:00 AM (GMT Standard Time, UTC+00:00)
    # Thursday, February 21, 2019

    GCast 36:

    Managing Tables with the Azure Storage Explorer

    The Azure Storage Explorer is a free resource to manage Azure Storage Accounts.
    This video shows how to manage Azure tables with this tool.

    Azure | GCast | Screencast | Video
    Thursday, February 21, 2019 9:44:00 AM (GMT Standard Time, UTC+00:00)
    # Thursday, January 3, 2019

    GCast 29:

    Introducing Cognitive Services and Computer Vision

    Microsoft Cognitive Services allow you to take advantage of Machine Learning without all the complexities of Machine Learning. In this video, I introduce Cognitive Services by showing how to use Computer Vision to analyze an image, automatically detecting properties of that image.

    Thursday, January 3, 2019 12:53:21 PM (GMT Standard Time, UTC+00:00)
    # Monday, December 31, 2018

    Episode 544

    Elizabeth Graham on Azure Logic Apps

    Microsoft Global Black Belt Elizabeth Graham describes Azure Logic Apps and how to use them to solve integration and workflow projects.

    Monday, December 31, 2018 9:06:00 AM (GMT Standard Time, UTC+00:00)
    # Thursday, December 27, 2018

    GCast 28:

    Natural Language Processing with LUIS

    Learn how to use Microsoft Language Understanding Information Service (LUIS) to build models that provide Natural Language Processing (NLP) for your application.

    Thursday, December 27, 2018 9:53:00 AM (GMT Standard Time, UTC+00:00)
    # Monday, December 24, 2018

    Episode 543

    Alex Mang on Azure Durable Functions

    Alex Mang describes Azure Durable Functions and some real-world examples of how he uses them.

    Monday, December 24, 2018 9:42:00 AM (GMT Standard Time, UTC+00:00)
    # Thursday, December 20, 2018

    GCast 27:

    QnAMaker

    Learn how to use QnA Maker to create a bot that automatically answers questions.

    Azure | Bots | GCast | Screencast
    Thursday, December 20, 2018 9:26:00 AM (GMT Standard Time, UTC+00:00)
    # Thursday, December 13, 2018

    GCast 26:

    Creating a Chatbot in the Azure Portal

    In this video, I show how to create, deploy, and edit a chatbot completely within your web browser using the Azure Portal. You can even download the source code and edit it in Visual Studio, if you wish.

    Thursday, December 13, 2018 9:19:00 AM (GMT Standard Time, UTC+00:00)
    # Friday, November 30, 2018

    Given an Azure Function, you may wish to change the URL that points to this function. There are several reasons to do this:

    1. Make the URL simpler
    2. Make the URL more readable
    3. Make the URL conform to your organization's standards

    To reassign a Function's URL, you will need to know the existing URL. To find this, select the Function and click the "Get function URL" link, as shown in Fig. 1.

    FP01-GetFunctionUrl
    Fig. 1

    The Function URL dialog displays, as shown in Fig. 2.

    FP02-FunctionUrl
    Fig. 2

    Click the [Copy] icon to copy this URL to your clipboard. You may wish to paste this into a text document or another safe place for later use.

    Each Azure Function App contains a "Proxies" section, as shown in Fig. 3.

    FP03-Proxies
    Fig. 3

    Click the [+] icon to display the "New proxy" blade, as shown in Fig. 4.

    FP04-Proxy
    Fig. 4

    At the "Name" field, enter a name to identify this proxy. I like to include the name of the original function in this name, to make it easy to track to its source.

    At the "Route template" field, enter a template for the new URL. This is everything after the "https://" and the domain name. If the function accepts parameters, you will need to add these and surround them with curly brackets: "{" and "}".

    At the "Allowed HTTP methods" dropdown, select "All methods" or check only those methods you wish your new URL to support.

    At the "Backend URL" field, enter the full original URL copied earlier to your clipboard. If the function accepts parameters, you will need to add these and surround them with curly brackets: "{" and "}". The parameter name here must match the parameter name in the "Route template" field.

    An example

    For example, if you create a Function with an HttpTrigger and accept all the defaults (as described here), you will have a function that accepts a querystring parameter of "name" and outputs "Hello, " followed by the value of name.

    My original function URL looked similar to the following:

    https://dgtestfa.azurewebsites.net/api/HttpTrigger1?code=idLURPj58mZrDdkAh9LkTkkz2JZRmp6/ru/DQ5RbotDpCtg/WY/pRw==

    So, I entered the following values into the "New Proxy" blade:

    Name: HttpTrigger1Proxy
    Route template: welcome/{name}
    Allowed HTTP methods: All methods
    Backend URL: https://dgtestfa.azurewebsites.net/api/HttpTrigger1?code=idLURPj58mZrDdkAh9LkTkkz2JZRmp6/ru/DQ5RbotDpCtg/WY/pRw==&name={name}

    With these settings, I can send a GET or POST request to the following url:

    https://dgtestfa.azurewebsites.net/welcome/David

    and receive the expected response:

    Hello, David

    This new URL is much simpler and easier to remember than the original one.
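
    If you want to verify the proxy from code rather than from a browser, a quick sketch like the following works (it uses the example proxy URL above; substitute your own Function App URL):

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class ProxyCheck
        {
            static async Task Main()
            {
                using (var client = new HttpClient())
                {
                    // The proxy URL created above
                    var response = await client.GetStringAsync("https://dgtestfa.azurewebsites.net/welcome/David");
                    Console.WriteLine(response);   // Expected: Hello, David
                }
            }
        }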

    In this article, I showed you how to create a proxy that redirects from a new URL to an existing Azure Function.

    Friday, November 30, 2018 9:43:00 AM (GMT Standard Time, UTC+00:00)
    # Thursday, November 29, 2018

    GCast 24:

    Azure Function CosmosDB Binding

    Using the CosmosDB binding in an Azure Function allows you to read and write documents in an Azure CosmosDB database without writing code.

    Thursday, November 29, 2018 9:22:00 AM (GMT Standard Time, UTC+00:00)
    # Wednesday, November 28, 2018

    Setting up continuous deployment of an Azure Function from GitHub is straightforward.

    For this article, I started with an Azure Function (created using Visual Studio) in a GitHub repository and an empty Azure Function App.

    See this article for information on GitHub.

    See this article to learn how to create an Azure Function App.

    Open the Azure Function App in the Azure portal, as shown in Fig. 1.

    DF01-FunctionApp-big
    Fig. 1

    Click the "Platform features" link (Fig. 2) to display the "Platform features" page, as shown in Fig. 3.

    DF02-PlatformFeaturesLink
    Fig. 2

    DF03-PlatformFeatures
    Fig. 3

    Under "Code Deployment", click the "Deployment Center" link to open the "Deployment Center" page, as shown in Fig. 4.

    DF04-DeploymentCenter
    Fig. 4

    On the "Deployment Center" page, select the "GitHub " tile and click the [Continue] button, as shown in Fig. 5.

    DF05-DeploymentCenterContinue
    Fig. 5

    The wizard advances to the "Configure" page of the "Deployment Center" wizard, as shown in Fig. 6.

    DF06-ConfigureDeployment
    Fig. 6

    At the "Organization" dropdown, select the GitHub  account where your code resides. If you don't see the account, you may need to give your Azure account permission to view your GitHub  repository.

    At the "Repository" dropdown, select the code repository containing your Azure Functions.

    At the "Branch" dropdown, select the code branch you wish to deploy whenever a change is pushed to the repository. I almost always select "master" for this.

    Click the [Continue] button to advance to the "Summary" page of the "Deployment Center" wizard, as shown in Fig. 7.

    DF07-Summary
    Fig. 7

    On the "Summary" page, review your choices and click the [Finish] button if they are correct. (If they are not correct, click the [Back] button and make the necessary corrections.

    In a few minutes, the function or functions in your repository will appear under your Function App in the Azure portal, as shown in Fig. 8.

    DF08-Function
    Fig. 8

    Any future changes pushed to the repository will automatically be added to the Function App.

    For example, I can open my Visual Studio project and add a second function, as shown in Fig. 9.

    DF09-AddNewFunction
    Fig. 9

    After testing the change, I can push it to my GitHub repository with the following commands:

    git add .
    git commit -m "Added a new function"
    git push origin master

    Listing 1

    Because a webhook was added to my GitHub repository, this change will be pushed to my Azure Function App. Fig. 10 shows the Function app a few minutes after I pushed my change to GitHub.

    DF10-FunctionAppAfterPush
    Fig. 10

    In this article, you learned how to configure continuous deployment of your Azure Function App from a GitHub repository.

    Wednesday, November 28, 2018 8:33:00 AM (GMT Standard Time, UTC+00:00)
    # Tuesday, November 27, 2018

    In a recent article, I showed how to create a Durable Azure Function. If you are unfamiliar with Durable Functions, I recommend you read that article first.

    In that article, the Durable Function called 3 Activity Functions in sequence. No Function executed until the Function before it completed. Sometimes it is important that Functions execute in a certain order. But sometimes it does not matter in which order the Functions execute - only that they each complete successfully before another Activity Function is called. In these cases, executing sequentially is a waste of time. It is more efficient to execute these Azure Functions in parallel.

    In this article, I will show how to create a durable function that executes three Activity Functions in parallel; then waits for all 3 to complete before executing a fourth function.
     
    Fig. 1 illustrates this pattern.

    PD01-ParallelDurableFunctionFlow
    Fig. 1
     
    As we noted in the earlier article, a Durable function is triggered by a starter function, which is in turn triggered by an HTTP request, database change, timer, or any of the many triggers supported by Azure Functions, as shown in Fig. 2.

    PD02-DurableFunctionTrigger
    Fig. 2

    I created 4 Activity Functions that do nothing more than write a couple messages to the log (I use LogWarning, because it causes the text to display in yellow, making it easier to find); delay a few seconds (to simulate a long-running task); and return a string consisting of the input string, concatenated with the name of the current function. The functions are nearly identical: Only the Function Name, the message, and the length of delay are different.

    The 4 functions are shown below:

        public static class Function1
         {
             [FunctionName("Function1")]
             public static async Task<string> Run(
                 [ActivityTrigger] string msg,
                 ILogger log)
             {
                 log.LogWarning("This is Function 1");
                 await Task.Delay(15000);
                 log.LogWarning("Function 1 completed");
                 msg += "Function 1";
                return msg;
            }
        }
      

    Listing 1

        public static class Function2 
        { 
            [FunctionName("Function2")] 
            public static async Task<string> Run( 
                [ActivityTrigger] string msg, 
                ILogger log) 
            { 
                 log.LogWarning("This is Function 2"); 
                await Task.Delay(10000); 
                log.LogWarning("Function 2 completed"); 
                msg += "Function 2"; 
                return msg; 
            } 
        }
      

    Listing 2

        public static class Function3 
         { 
            [FunctionName("Function3")] 
            public static async Task<string> Run( 
                [ActivityTrigger] string msg, 
                ILogger log) 
            { 
                log.LogWarning("This is Function 3"); 
                await Task.Delay(5000); 
                 log.LogWarning("Function 3 completed"); 
                msg += "Function 3"; 
                return msg; 
            } 
        }
      

    Listing 3

        public static class Function4
        {
            [FunctionName("Function4")]
            public static async Task<string> Run(
                [ActivityTrigger] string msg,
                ILogger log)
            {
                log.LogWarning("This is Function 4");
                // Delay a random 8-12 seconds to simulate a long-running task
                int secondsDelay = new Random().Next(8, 12);
                await Task.Delay(secondsDelay * 1000);
                log.LogWarning("Function 4 completed");
                msg += "\n\rFunction 4";
                return msg;
            }
        }
      

    Listing 4

    We use the Task Parallel Library to launch the first 3 functions and have them run in parallel; then, wait until each of the first 3 completes before executing the 4th Activity Function.

    Listing 5 shows this code in our Durable Orchestration function.

        public static class DurableFunction1 
        { 
            [FunctionName("DurableFunction1")] 
             public static async Task<IActionResult> Run( 
                [OrchestrationTrigger] DurableOrchestrationContext ctx, 
                ILogger log) 
            { 
                var msg = "Durable Function: "; 
                var parallelTasks = new List<Task<string>>(); 
                 Task<string> task1 = ctx.CallActivityAsync<string>("Function1", msg); 
                parallelTasks.Add(task1); 
                Task<string> task2 = ctx.CallActivityAsync<string>("Function2", msg); 
                parallelTasks.Add(task2); 
                Task<string> task3 = ctx.CallActivityAsync<string>("Function3", msg); 
                 parallelTasks.Add(task3);
    
                await Task.WhenAll(parallelTasks);
    
                // All 3 Activity functions finished 
                msg = task1.Result + "\n\r" + task2.Result + "\n\r" + task3.Result;
    
                // Use LogWarning, so it shows up in Yellow, making it easier to spot 
                log.LogWarning($"All 3 Activity functions completed for orchestration {ctx.InstanceId}!");
    
                msg = await ctx.CallActivityAsync<string>("Function4", msg); 
                log.LogWarning(msg);
    
                return new OkObjectResult(msg); 
            } 
        }
      

    Listing 5

    We create a new List of Tasks and add each activity to that list:

    var msg = "Durable Function: ";
    var parallelTasks = new List<Task<string>>();
    Task<string> task1 = ctx.CallActivityAsync<string>("Function1", msg);
    parallelTasks.Add(task1);
    Task<string> task2 = ctx.CallActivityAsync<string>("Function2", msg);
    parallelTasks.Add(task2);
    Task<string> task3 = ctx.CallActivityAsync<string>("Function3", msg);
    parallelTasks.Add(task3);

    The following line tells the system to wait until all 3 tasks in that list are completed.

    await Task.WhenAll(parallelTasks);

    When all 3 tasks complete, we resume the program flow, calling the 4th Activity and logging the output:

    log.LogWarning($"All 3 Activity functions completed for orchestration {ctx.InstanceId}!");
    msg = await ctx.CallActivityAsync<string>("Function4", msg);
    log.LogWarning(msg);

    As in the previous article, we launch this Durable Orchestration Function with a starter function (in this case a function with an HTTP trigger), as shown in Listing 6 below.

        public static class StarterFunction1 
        { 
            [FunctionName("StarterFunction1")] 
            public static async Task<HttpResponseMessage> Run( 
                [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] 
                HttpRequestMessage req, 
                [OrchestrationClient] DurableOrchestrationClient starter, 
                TraceWriter log) 
            { 
                 log.Info("About to start orchestration");
    
                var orchestrationId = await starter.StartNewAsync("DurableFunction1", log); 
                return starter.CreateCheckStatusResponse(req, orchestrationId); 
            } 
        }
      

    Listing 6

    Testing the Orchestration

    We can test this orchestration by running the solution, which displays the HTTP Trigger URL, as shown in Fig. 3.

    PD003-StartFunction
    Fig. 3

    We can then open a browser, type the HTTP Trigger URL in the address bar, and press [ENTER] to trigger the function, as shown in Fig. 4.

    PD004-TriggerFunction
    Fig. 4

    Switch back to the function output to view the messages as they scroll past. You should see output from each of the first 3 functions (although not necessarily in the order called), followed by a message indicating the first 3 are complete; then output from Function 4. This is shown in Fig. 5.

    PD005-FinalOutput
    Fig. 5

    You can view this project under “Durable Functions” in this GitHub repository.

    In this article, I showed how to create a Durable Orchestration Function that launches activity functions that run in parallel.

    Tuesday, November 27, 2018 7:29:00 AM (GMT Standard Time, UTC+00:00)
    # Monday, November 26, 2018

    Episode 539

    Brady Gaster on Marketing Azure

    Brady Gaster helps to build and coordinate many of the Azure demos you see on stage at large technical conferences. He talks about how his team tells a story with tools and code.

    Monday, November 26, 2018 7:22:00 AM (GMT Standard Time, UTC+00:00)
    # Friday, November 23, 2018

    Azure Functions provide a simple way to deploy code in a scalable, cost-effective way.

    By default, Azure Functions are stateless, which makes it difficult to create complex workflows with basic Azure functions - particularly long-running workflows, such as those that require human interaction.

    A Durable Azure Function maintains state for a long time, without having to stay in memory, making it ideal for orchestrations. Stateful information is stored in an Azure Storage Account when the process terminates. This saves you money, because the default pricing model for Azure Functions only charges you while the function is running.

    A Durable Function is not triggered in the same way as other Azure Functions (via HTTP, queue, database changes, timer, etc.). Rather, it is called from a "starter" function, which can be triggered in the usual way.

    Rather than placing all logic within a single Durable Function, it usually makes more sense to split tasks into individual Activity Functions and have the Durable Function manage these. The simplest Durable Function calls multiple activities in sequence. A diagram of this is shown in Fig. 1.

    DF01-DurableFunctionFlow
    Fig. 1

    You can create a Function App for an Azure Durable function in Visual Studio in the same way you create any function - by selecting File | New Project from the menu and selecting "Azure Functions" from the Project Templates dialog, as shown in Fig. 2.

    DF02-NewFunctionProject
    Fig. 2

    Select "Azure Functions v2" from the top dropdown and HttpTrigger" from the list of templates, as shown in Fig. 3; then, click the [OK] button to create the solution and project.

    DF03-FunctionTemplate
    Fig. 3

    The new project contains a function named "Function1". Right-click this function in the Solution Explorer and rename it to "StarterFunction", as shown in Fig. 4.

    DF04-RenameFunction
    Fig. 4

    Open StarterFunction.cs and change the first line of the class from

    [FunctionName("Function1")]

    to

    [FunctionName("StarterFunction")]

    Now, you can add a Durable Function to the project. Right-click the project in the Solution Explorer and select Add | New Azure Function from the context menu, as shown in Fig. 5.

    DF05-AddNewAzureFunction
    Fig. 5

    Name the new function "DurableFunction1", as shown in Fig. 6.

    DF06-AddDurableFunction
    Fig. 6

    At the next dialog, select "Durable Function Orchestration" from the list of triggers and click the [OK] button to create the function, as shown in Fig. 7.

    DF07-DurableFunctionsOrchestration
    Fig. 7

    This Durable Function will manage 3 functions, calling each one sequentially. To the project, add 3 new functions named "Function1", "Function2", and "Function3". It does not matter which trigger you choose, because we are going to overwrite the trigger. Paste the code below into each function:

        public static class Function1 
        { 
            [FunctionName("Function1")] 
            public static async Task<string> Run( 
                [ActivityTrigger] string msg, 
                ILogger log) 
            { 
                log.LogWarning("This is Function 1");
    
                await Task.Delay(10000); 
                msg += "Function1 done; "; 
                return msg; 
            } 
        }
      

    Listing 1

        public static class Function2 
        { 
            [FunctionName("Function2")] 
            public static async Task<string> Run( 
                 [ActivityTrigger] string msg, 
                ILogger log) 
            { 
                log.LogWarning("This is Function 2");
    
                await Task.Delay(10000); 
                msg += "Function2 done; "; 
                return msg; 
            } 
        }
      

    Listing 2

        public static class Function3 
        { 
            [FunctionName("Function3")] 
            public static async Task<string> Run( 
                [ActivityTrigger] string msg, 
                ILogger log) 
            { 
                log.LogWarning("This is Function 3");
    
                await Task.Delay(10000); 
                msg += "Function3 done; "; 
                return msg; 
            } 
        }
      

    Listing 3

    As you can see, each function essentially does the same thing: log a brief message; wait 10 seconds; then, return a string consisting of the string passed in with a bit more appended to the end.

    Notice also that the "msg" parameter in each function is decorated with the [ActivityTrigger] attribute, which is what makes each of these an Activity Function.

    The Task.Delay() simulates a long-running activity. Imagine an activity that requires human input, such as a manager navigating to a web page and filling out a form. It might take days or weeks for this to happen. We certainly would not want the application to continue running during this time: it would be an inefficient use of resources and it would be expensive. Durable Functions handle this by storing state information in Azure storage; then retrieving that state when the function needs to resume.
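
    For a true human-in-the-loop step, a Durable Function would typically wait for an external event rather than a timer. The demo in this article sticks with Task.Delay(), but a sketch of the event-based pattern looks something like the orchestration below (the event name "Approval" is just an example, and it assumes the same Durable Task extension used in this project).

        public static class ApprovalOrchestration
        {
            [FunctionName("ApprovalOrchestration")]
            public static async Task<string> Run(
                [OrchestrationTrigger] DurableOrchestrationContext ctx,
                ILogger log)
            {
                // The orchestration is checkpointed here and unloaded from memory.
                // It resumes only when an "Approval" event is raised against its instance ID
                // (for example, by another function calling DurableOrchestrationClient.RaiseEventAsync).
                string decision = await ctx.WaitForExternalEvent<string>("Approval");

                log.LogWarning($"Received approval decision: {decision}");
                return decision;
            }
        }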

    Return to the DurableFunction1 class and replace the code with the following:

        public static class DurableFunction1 
        { 
            [FunctionName("DurableFunction1")] 
            public static async Task<IActionResult> Run( 
                [OrchestrationTrigger] DurableOrchestrationContext ctx, 
                ILogger log) 
            { 
                var msg = "Durable Function: "; 
                 msg = await ctx.CallActivityAsync<string>("Function1", msg); 
                msg = await ctx.CallActivityAsync<string>("Function2", msg); 
                msg = await ctx.CallActivityAsync<string>("Function3", msg);
    
                // Use LogWarning, so it shows up in Yellow, making it easier to spot 
                log.LogWarning(msg);
    
                return new OkObjectResult(msg); 
            } 
        }
      

    Listing 4

    You will probably have to add the following to the top of the file in order for it to compile:

    using Microsoft.AspNetCore.Mvc;

    In Listing 4, we see that the Durable Function calls the 3 Activity Functions in order. It passes to each Activity Function the output of the previous function. At the end of the orchestration, we expect to see a concatenation of messages from each of the 3 Activity Functions.

    Notice also the parameter of type DurableOrchestrationContext, which is decorated with the [OrchestrationTrigger] attribute. This identifies this as a Durable Orchestration Function.

    Finally, return to the StarterFunction class and replace the code with the following:

        public static class StarterFunction
        {
            [FunctionName("StarterFunction")]
            public static async Task<HttpResponseMessage> Run(
                [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]
                HttpRequestMessage req,
                [OrchestrationClient] DurableOrchestrationClient starter,
                ILogger log)
            {
                log.LogInformation("About to start orchestration");
    
                var orchestrationId = await starter.StartNewAsync("DurableFunction1", log);
                return starter.CreateCheckStatusResponse(req, orchestrationId);
            }
        }
      

    Listing 5

    To see this in action, compile and run the project. A console will display similar to the one in Fig. 8.

    DF08-RunFunction
    Fig. 8.

    You can trigger the StarterFunction by issuing an HTTP GET to the URL displayed in the console (in this case http://localhost:7071/api/StarterFunction). Open a browser, enter this URL into the address bar, and press [ENTER].

    Watch the console. You should see the log statements in each of the functions display in turn. Finally, you will see the final value of the msg variable after it has been passed through all 3 Activity Functions. The output should look something like Fig. 9.

    DF09-FunctionComplete
    Fig. 9

    This illustrates the concepts of a Durable Orchestration Function. You can view the source code in the SequentialDurableFunctionDemo project at my Azure-Function-Demos GitHub repository.

    Friday, November 23, 2018 9:23:00 AM (GMT Standard Time, UTC+00:00)
    # Thursday, November 22, 2018

    GCast 23:

    Azure Logic Apps

    Learn how to create a Logic App to deploy a workflow in the cloud.

    Thursday, November 22, 2018 9:12:00 AM (GMT Standard Time, UTC+00:00)
    # Wednesday, November 21, 2018

    Azure Functions allow you to declaratively add bindings to external resources by decorating a C# function with binding attributes.

    This means you need to write less code and the code you do write will focus more on your business logic than on updating resources.

    In this article, I will show you how to add CosmosDB bindings to an Azure function in order to read from and write to a CosmosDB database.

    Create and configure a CosmosDB database and collection

    See this article to learn how to create a new CosmosDB instance.

    Next, create a Database and a Collection within your CosmosDB account. This article describes how to create a CosmosDB Database and Collection; or you can quickly create a Database named "ToDoList" and a Collection named "Items" from the "Quick Start" tab of the CosmosDB account you created, as shown in Fig. 1.

    CD01-QuickStart
    Fig. 1

    As you work with data in this database, you can view the documents on the "Data Explorer" tab, as shown in Fig. 2.

    CD02-DataExplorer
    Fig. 2

    You will need the Connection String of your CosmosDB. You can find two connection strings on the "Keys" tab, as shown in Fig. 3. Copy either one and save it for later.

    CD03-Keys
    Fig. 3

    Visual Studio project

    Create a function in Visual Studio 2017. If you base it on the "Azure Functions" template (Fig. 4), it will have many of the necessary references.

    CD04-NewAzureFunctionApp
    Fig. 4

    Open the local.settings.json file and add a key for "CosmosDBConnection", as shown in Fig. 5. Set its value to the connection string you copied from the "Keys" blade above.

    CD05-localsettingsjson
    Fig. 5

    Delete the existing Function1.cs file from the project and add a new function by right-clicking the project in the Solution Explorer and selecting Add | New Function from the context menu, as shown in Fig. 6. Give the function a meaningful name.

    CD06-AddFunction
    Fig. 6

    Repeat this for any function you wish to add.

    Create a model of the expected data

    CosmosDB is a schemaless document database, meaning that the database engine does not enforce the type of data it accepts. This is distinct from something like SQL Server, which requires you to define in advance the name, data type, and rules of each column you expect to store.

    If you want to validate data, you must do so in your application. One way to do this is to create a Model class that matches the expected incoming data.

    In my demo, I expect only to store data that looks like the following:

    {
      "id" : "001",
      "description" : "Write blog post",
      "isComplete" : false
    }
      

    So I created the ToDoItem class shown in Listing 1.

    // The [JsonProperty] attributes require Newtonsoft.Json (referenced by the Azure Functions template)
    public class ToDoItem
    {
        [JsonProperty("id")]
        public string Id { get; set; }

        [JsonProperty("description")]
        public string Description { get; set; }

        [JsonProperty("isComplete")]
        public bool IsComplete { get; set; }
    }
      

    Listing 1

    Insert a document

    The code below defines a function that inserts a new document into the database. The function is triggered when you send an HTTP POST request to the function's URL (in this case, "api/InsertItem"). The inserted document takes its value from the JSON in the request body.

    [FunctionName("InsertItem")] 
    public static HttpResponseMessage Run( 
        [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = null)]HttpRequestMessage req, 
        [CosmosDB( 
            databaseName: "ToDoList", 
            collectionName: "Items", 
            ConnectionStringSetting = "CosmosDBConnection")] 
        out ToDoItem document, 
        ILogger log) 
    { 
        var content = req.Content; 
        string jsonContent = content.ReadAsStringAsync().Result; 
        document = JsonConvert.DeserializeObject<ToDoItem>(jsonContent);
    
        log.LogInformation($"C# Queue trigger function inserted one row");
    
        return new HttpResponseMessage(HttpStatusCode.Created); 
    }
      

    Let's walk through the function.

    [FunctionName("InsertItem")]

    The name of the function is InsertItem

    public static HttpResponseMessage Run(

    The Run method executes when the function is triggered. It returns an HTTP Response Message

    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = null)]HttpRequestMessage req,

    The first parameter is the incoming HTTP Request. It is decorated with HttpTrigger, indicating this is an HTTP trigger. Within this decorator's parameters, we indicate that the function can be called anonymously, that it can only be called with an HTTP POST (not GET or PUT or any other verb); and that we are not changing the default routing.

    [CosmosDB(
          databaseName: "ToDoList",
          collectionName: "Items",
          ConnectionStringSetting = "CosmosDBConnection")]
         out ToDoItem document,        

    The second parameter is an output parameter of type ToDoItem. We will populate this with the data in the Request body, so we type it as a ToDoItem. This parameter is decorated with the CosmosDB attribute, indicating that we will automatically insert this document into the CosmosDB. The databaseName, collectionName, and ConnectionStringSetting tell the function exactly where to store the document. The ConnectionStringSetting argument must match the name we added for the connection string in the local.settings.json file, as described above.

    ILogger log)

    The logger allows us to log information at points in the function, which can be helpful when troubleshooting and debugging.

    var content = req.Content;
    string jsonContent = content.ReadAsStringAsync().Result;
    document = JsonConvert.DeserializeObject<ToDoItem>(jsonContent);

    The 3 lines above retrieve the body in the HTTP POST request and convert it to a .NET object of type ToDoItem, which validates that it is the correct format.

    log.LogInformation($"C# Queue trigger function inserted one row");

    This line is not necessary, but may help us to understand what part of the function executed when we are troubleshooting.

    return new HttpResponseMessage(HttpStatusCode.Created);

    When the document is successfully inserted, we return an HTTP 201 (Created) status to indicate success.
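
    To exercise this function while it runs locally, you can POST a JSON body to it from any HTTP client. Here is a minimal sketch using HttpClient; the URL assumes the Functions runtime's default local port, which may differ on your machine.

        using System;
        using System.Net.Http;
        using System.Text;
        using System.Threading.Tasks;

        class InsertItemTest
        {
            static async Task Main()
            {
                // Assumes the Function App is running locally on the default port
                var url = "http://localhost:7071/api/InsertItem";
                var json = "{ \"id\": \"001\", \"description\": \"Write blog post\", \"isComplete\": false }";

                using (var client = new HttpClient())
                {
                    var content = new StringContent(json, Encoding.UTF8, "application/json");
                    var response = await client.PostAsync(url, content);
                    Console.WriteLine(response.StatusCode);   // Expect: Created
                }
            }
        }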

    Retrieve all documents

    The following function retrieves all the documents in a container.

        public static class GetItems
        {
            [FunctionName("GetItems")]
            public static async Task<IActionResult> Run(
                [HttpTrigger(AuthorizationLevel.Function, "get", Route = null)] HttpRequest req,
                [CosmosDB(
                    databaseName: "ToDoList",
                    collectionName: "Items",
                    ConnectionStringSetting = "CosmosDBConnection",
                    SqlQuery = "select * from Items")
                ]IEnumerable<ToDoItem> toDoItems,
                ILogger log)
            {
                log.LogInformation($"Function triggered");
    
                if (toDoItems == null)
                {
                    log.LogInformation($"No Todo items found");
                }
                else
                {
                    var ltodoitems = (List<ToDoItem>)toDoItems;
                    if (ltodoitems.Count == 0)
                    {
                        log.LogInformation($"No Todo items found");
                    }
                    else
                    {
                        log.LogInformation($"{ltodoitems.Count} Todo items found");
                    }
                }
    
                return new OkObjectResult(toDoItems);
            }
        }
      

    Breaking down this function:

    [FunctionName("GetItems")]        

    The name of the function is “GetItems”.

    public static async Task<IActionResult> Run(

    The Run method executes when the function is triggered. This method is asynchronous and will eventually return an ActionResult.

    [HttpTrigger(AuthorizationLevel.Function, "get", Route = null)] HttpRequest req,

    The first parameter is the incoming HTTP Request. It is decorated with HttpTrigger, indicating this is an HTTP trigger. Within this decorator's parameters, we indicate that the function requires a function key, that it can only be called with an HTTP GET; and that we are not changing the default routing.

    [CosmosDB(
    databaseName: "ToDoList",
    collectionName: "Items",
    ConnectionStringSetting = "CosmosDBConnection",
    SqlQuery = "select * from Items") ]IEnumerable<ToDoItem> toDoItems,

    This parameter is what will be returned by the function (eventually, because it runs asynchronously). It is a list of objects of type ToDoItem. When serialized, this will be transformed into an array of JSON objects. This parameter is decorated with the CosmosDB attribute, indicating that we will automatically retrieve the list from the CosmosDB. The databaseName, collectionName, and ConnectionStringSetting tell the function exactly where to find the documents. The SqlQuery property tells the binding what query to run to retrieve the data (in this case, return all the documents).

    ILogger log)

    The logger allows us to log information at points in the function, which can be helpful when troubleshooting and debugging.

    log.LogInformation($"Function triggered");
    if (toDoItems == null)
        {
             log.LogInformation($"No Todo items found");
        }
        else
        {
             var ltodoitems = (List<ToDoItem>)toDoItems;
             if (ltodoitems.Count == 0)
            {
                log.LogInformation($"No Todo items found");
            }
            else
            {
                 log.LogInformation($"{ltodoitems.Count} Todo items found");
             }
        }

    We did not need to write code to query the database. This happens automatically. The code above simply verifies that items were returned, transforms them into a List<ToDoItem>, and stores this list in a local variable.

    return new OkObjectResult(toDoItems);

    We return a 200 (“OK”) HTTP response and the list of items.

    Retrieve a single document by its ID

    The following function retrieves a single document, given the ID.

        public static class GetItemById
        {
            [FunctionName("GetItemById")]
                public static async Task<IActionResult> Run(
                [HttpTrigger(AuthorizationLevel.Function, "get", Route = "GetItem/{id}")] HttpRequestMessage req,
                [CosmosDB(
                    databaseName: "ToDoList",
                    collectionName: "Items",
                    ConnectionStringSetting = "CosmosDBConnection",
                    Id = "{id}")
                ]ToDoItem toDoItem,
                ILogger log)
            {
                log.LogInformation($"Function triggered");
    
                if (toDoItem == null)
                {
                    log.LogInformation($"Item not found");
                    return new NotFoundObjectResult("Id not found in collection");
                }
                else
                {
                    log.LogInformation($"Found ToDo item {toDoItem.Description}");
                    return new OkObjectResult(toDoItem);
                }
    
            }
        }
      

    Here are the details of this function:

    [FunctionName("GetItemById")]        

    The name of the function is “GetItemById”

    public static async Task<IActionResult> Run(

    The Run method executes when the function is triggered. This method is asynchronous and will eventually return an ActionResult.

    [HttpTrigger(AuthorizationLevel.Function, "get", Route = "GetItem/{id}")] HttpRequestMessage req,

    The first parameter is the incoming HTTP Request. It is decorated with HttpTrigger, indicating this is an HTTP trigger. Within this decorator's parameters, we indicate that the function requires a function key, that it can only be called with an HTTP GET; and that the route is changed to "GetItem/{id}", so the document ID is passed as part of the URL.

    [CosmosDB(
    databaseName: "ToDoList",
    collectionName: "Items",
    ConnectionStringSetting = "CosmosDBConnection",
    Id = "{id}")
    ]ToDoItem toDoItem,
     

    This parameter is what will be returned by the function (eventually, because it runs asynchronously). It will be an object of type ToDoItem. This parameter is decorated with the CosmosDB attribute, indicating that we will automatically retrieve the document from the CosmosDB. The databaseName, collectionName, and ConnectionStringSetting tell the function exactly where to find the document. The Id property, bound to the {id} route parameter, tells the function which document to retrieve.

    ILogger log)

    The logger allows us to log information at points in the function, which can be helpful when troubleshooting and debugging.

    log.LogInformation($"Function triggered");

    Debugging information. Not necessary for the operation, but helpful when troubleshooting.  

    if (toDoItem == null)
    {
        log.LogInformation($"Item not found");
       return new NotFoundObjectResult("Id not found in collection");
    }
    else
    {
        log.LogInformation($"Found ToDo item {toDoItem.Description}");
       return new OkObjectResult(toDoItem);
    }

    We did not need to write code to query the database. This happens automatically. The code above simply checks whether an item matching the ID was returned. If an item is found, we return a 200 (“OK”) HTTP response, along with the item. If no item is returned, we return a 404 (“Not Found”) HTTP response.


    Retrieve a set of documents using a query

    The following function retrieves a set of documents. A query tells the function how to filter, sort, and otherwise retrieve the documents. In this example, we only want to return documents for which isComplete = true.

        public static class GetCompleteItems
        {
            [FunctionName("GetCompleteItems")]
            public static async Task<IActionResult> Run(
                [HttpTrigger(AuthorizationLevel.Function, "get", Route = null)] HttpRequest req,
                [CosmosDB(
                    databaseName: "ToDoList",
                    collectionName: "Items",
                    ConnectionStringSetting = "CosmosDBConnection",
                    SqlQuery = "select * from Items i where i.isComplete")
                ]IEnumerable<ToDoItem> toDoItems,
                ILogger log)
            {
                log.LogInformation($"Function triggered");
    
                if (toDoItems == null)
                {
                    log.LogInformation($"No complete Todo items found");
                }
                else
                {
                    var ltodoitems = (List<ToDoItem>)toDoItems;
                    if (ltodoitems.Count == 0)
                    {
                        log.LogInformation($"No complete Todo items found");
                    }
                    else
                    {
                        log.LogInformation($"{ltodoitems.Count} Todo items found");
                    }
                }
    
                return new OkObjectResult(toDoItems);
            }
        }
      

    We will now explore this function in more detail:   

    [FunctionName("GetCompleteItems")]        

    The name of the function is “GetCompleteItems”.

    public static async Task<IActionResult> Run(

    The Run method executes when the function is triggered. This method is asynchronous and will eventually return an ActionResult.

    [HttpTrigger(AuthorizationLevel.Function, "get", Route = null)] HttpRequest req,

    The first parameter is the incoming HTTP Request. It is decorated with HttpTrigger, indicating this is an HTTP trigger. Within this decorator's parameters, we indicate that the function requires a function key, that it can only be called with an HTTP GET; and that we are not changing the default routing.

    [CosmosDB(
    databaseName: "ToDoList",
    collectionName: "Items",
    ConnectionStringSetting = "CosmosDBConnection",
    SqlQuery = "select * from Items i where i.isComplete")
    ]IEnumerable<ToDoItem> toDoItems,

    This parameter is what will be returned by the function (eventually, because it runs asynchronously). It is a list of objects of type ToDoItem. When serialized, this will be transformed into an array of JSON objects. This parameter is decorated with the CosmosDB attribute, indicating that we will automatically retrieve the list from the CosmosDB. The databaseName, collectionName, and ConnectionStringSetting tell the function exactly where to find the documents. The SqlQuery property tells the binding what query to run to retrieve the data (in this case, return only documents with isComplete=true). It is important to note that I am using the JSON property (“isComplete”), rather than the .NET class property (“IsComplete”), in this query. Even though they differ only in their case, the query is case-sensitive.

    ILogger log)

    The logger allows us to log information at points in the function, which can be helpful when troubleshooting and debugging.

    log.LogInformation($"Function triggered");
    if (toDoItems == null)
        {
             log.LogInformation($"No complete Todo items found");
        }
        else
        {
             var ltodoitems = (List<ToDoItem>)toDoItems;
             if (ltodoitems.Count == 0)
            {
                log.LogInformation($"No complete Todo items found");
            }
            else
            {
                 log.LogInformation($"{ltodoitems.Count} Todo items found");
             }
        }

    We did not need to write code to query the database. This happens automatically. The code above simply verifies that items were returned, casts them to a List<ToDoItem>, and stores that list in a local variable.

    return new OkObjectResult(toDoItems);

    We return a 200 (“OK”) HTTP response and the list of items.
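
    As a usage example, a client application could call this function over HTTP and deserialize the JSON array it returns. The sketch below assumes a hypothetical Function App URL and function key; substitute the values for your own deployment:

    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Newtonsoft.Json;

    public static class GetCompleteItemsClient
    {
        // Placeholder URL and key; replace with your Function App's values
        private const string FunctionUrl =
            "https://myfunctionapp.azurewebsites.net/api/GetCompleteItems?code=YOUR_FUNCTION_KEY";

        public static async Task<List<ToDoItem>> GetCompleteItemsAsync()
        {
            using (var client = new HttpClient())
            {
                // The function returns a JSON array of ToDoItem objects
                string json = await client.GetStringAsync(FunctionUrl);
                return JsonConvert.DeserializeObject<List<ToDoItem>>(json);
            }
        }
    }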

    Conclusion

    Notice that in each of these functions, I did not need to write code to query or update the database. By decorating a parameter with the CosmosDB attribute, the function automatically took care of the database operations.

    You can find this code in the CosmosDBBinding solution in my Azure Function demos on GitHub.

    Wednesday, November 21, 2018 9:07:00 AM (GMT Standard Time, UTC+00:00)
    # Tuesday, November 20, 2018

    In previous articles, I showed how to create Azure Function Apps and Azure Functions directly in the Azure Portal. You can also create Function Apps and Functions in Visual Studio and then deploy them to Azure. I prefer to do this, because it makes it easier to get my code into source control.

    Before working with and creating Azure artifacts in Visual Studio, you must install the Azure tools. To install these tools, launch the Visual Studio Installer and check "Azure Development", as shown in Fig. 1.

    AF01-AzureDevTools
    Fig. 1

    Once the Azure tools are installed, launch Visual Studio and select File | New | Project from the menu, as shown in Fig. 2.

    AF02-FileNewProject
    Fig. 2

    In the "New Project" dialog, expand Visual C# | Cloud in the left tree and select "Azure Functions" from the list of templates; then enter a project name and location, as shown in Fig. 3.

    AF03-AzureFunctionTemplate
    Fig. 3

    The next dialog (Fig. 4) presents a list of options for your Azure Function.

    AF04-FunctionOptions
    Fig. 4

    In the top dropdown, select "Azure Functions v2".

    Select "Http Trigger" to create a function that will be triggered by an HTTP GET or POST to a web service URL.

    At the "Storage Account" dropdown, select "Storage Emulator". This works well for running and testing your function locally. You can change this to an Azure Storage Account when you deploy the Function to Azure.

    At the "Access rights" dropdown, select "Function".

    Click the [OK] button to create an Azure Function App with a single Azure Function.

    A function is generated with the following code:

    [FunctionName("Function1")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("C# HTTP trigger function processed a request.");
    
        string name = req.Query["name"];
    
        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
        dynamic data = JsonConvert.DeserializeObject(requestBody);
        name = name ?? data?.name;
    
        return name != null
            ? (ActionResult)new OkObjectResult($"Hello, {name}")
            : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
    }
      

    Listing 1

    The method is decorated with the "FunctionName" attribute, which provides the name of the function.

    [FunctionName("Function1")]
      

    Notice that the first parameter is decorated with

    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]
      

    This tells the system that the Function is triggered by an HTTP request and that it will accept either the GET or POST verb.

    We also pass in an ILogger, so that we can output debugging information.

    Let's walk through the code in this function.

    Log some information, so we can confirm the function was properly triggered.

    log.LogInformation("C# HTTP trigger function processed a request.");
      

    If a "name" parameter is passed in the querystring, capture the value of this parameter.

    string name = req.Query["name"];
      

    If this is a POST request, there may be information sent in the request body. Retrieve this information and deserialize it into a dynamic object:

    string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); 
    dynamic data = JsonConvert.DeserializeObject(requestBody);
      

    If the "name" parameter was passed in the querystring, use that; if not, look for it in the JSON object from the request body.

    name = name ?? data?.name;
      

    If a "name" parameter was found, return an HTTP Response Code 200 (OK) with a body containing the text "Hello, " followed by the value of the name.

    If no "name" parameter was passed, return an HTTP Response Code 400 (Bad Request) with a message into the body indicating a name is required.

    return name != null 
        ? (ActionResult)new OkObjectResult($"Hello, {name}") 
        : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
      
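
    To exercise both code paths, you can pass the name on the querystring or in the request body. The sketch below assumes the Function App is running locally at the default address used by the local runtime (http://localhost:7071); adjust the URL and add a function key when calling a deployed app:

    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    public static class Function1Caller
    {
        // Default local address; replace with your deployed Function App URL
        private const string BaseUrl = "http://localhost:7071/api/Function1";

        public static async Task CallBothWaysAsync()
        {
            using (var client = new HttpClient())
            {
                // Pass the name on the querystring (GET)
                string viaQuery = await client.GetStringAsync(BaseUrl + "?name=Azure");

                // Pass the name in the request body as JSON (POST)
                var body = new StringContent("{\"name\":\"Azure\"}", Encoding.UTF8, "application/json");
                HttpResponseMessage viaBody = await client.PostAsync(BaseUrl, body);
            }
        }
    }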

    Publish App

    One quick way to publish a Function App to Azure is directly from Visual Studio. To do this, right-click the project in the Solution Explorer and select "Publish" from the context menu, as shown in Fig. 5.

    AF05-RightClickPublish
    Fig. 5

    The "Pick a publish target" dialog displays, as shown in Fig. 6.

    AF06-PickPublishTarget
    Fig. 6

    Check the "Run from ZIP" checkbox.

    Select either the "Create New" or "Select Existing" radio button, depending whether you wish to deploy to an existing or a newly-created Azure Function; then click the [Publish] button.

    The follow-up dialog if you select "Create New" is shown in Fig. 7a and for "Select existing" in Fig. 7b.

    Click the [OK] or [Create] button at the bottom of the follow-up dialog to deploy the Function.

    This article showed how to create an Azure Function App in Visual Studio, making it easier to test locally and integrate your code with source control.

    Tuesday, November 20, 2018 9:41:00 AM (GMT Standard Time, UTC+00:00)
    # Friday, November 16, 2018

    In a previous article, I showed you how to create a new Azure Function with an HTTP trigger.

    After you create an Azure Function, it is useful to be able to test it right in the Azure Portal.

    To test an Azure function, log into the Azure Portal, open the Function App, and select your Function, as shown in Fig. 1.

    TF01-Function
    Fig. 1

    Click the [Run] button (Fig. 2) above the Function to open a Log output window and a Testing dialog, as shown in Fig. 3.

    TF02-RunButton
    Fig. 2

    TF03-TestDialog
    Fig. 3

    In the Test dialog on the right, you can change the HTTP verb by selecting either "POST" or "GET" in the "HTTP method" dropdown, as shown in Fig. 4.

    TF04-HttpMethod
    Fig. 4

    If you select the "POST" HTTP method, the "Request body" section (Fig. 5) is enabled and you can modify the data you want to send in the HTTP Body of your request.

    TF05-RequestBody
    Fig. 5

    You can add querystring parameters to your request by clicking the "+ Add parameter" link under "Query" (Fig. 6) and entering a name and value of the parameter, as shown in Fig. 7.

    TF06-QueryParameters
    Fig. 6

    TF07-AddParameter
    Fig. 7

    Repeat this for as many querystring parameters as you need.

    Similarly, you can add name/value pairs to the HTTP header of your request by clicking the "+ Add header" link and entering the name and value of each header, as shown in Fig. 8.

    TF08-AddHeader
    Fig. 8

    When everything is configured the way you want, click the [Run] button at the bottom (Fig. 9) to call the web service and trigger your function.

    TF09-RunButton
    Fig. 9

    The "Output" section (Fig. 10) will display the HTTP response, as well as any text returned in the body of the response. Any response between 200 and 299 is good; any response of 400 and above indicates an error.

    TF10-Output
    Fig. 10

    If your function outputs log information, you will see this in the Log output window, as shown in Fig. 11.

    TF11-LogOutput
    Fig. 11

    In this article, I showed how to test a function from within the Azure portal. You should create more sophisticated automated tests as part of your build/deploy process, but this serves as a good, simple way to make sure your function is behaving as expected after you create it.
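
    As a starting point for such automated tests, you can call a function's Run method directly from a unit test. The sketch below assumes the Function1 class generated by the Visual Studio template (described in a previous article) and uses xUnit; the test framework and request setup are illustrative:

    using System.IO;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Http;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Extensions.Logging.Abstractions;
    using Xunit;

    public class Function1Tests
    {
        [Fact]
        public async Task Run_WithNameInQueryString_ReturnsOk()
        {
            // Build a fake HTTP request with ?name=World and an empty body
            var context = new DefaultHttpContext();
            HttpRequest request = context.Request;
            request.QueryString = new QueryString("?name=World");
            request.Body = new MemoryStream();

            // Call the function's Run method directly
            IActionResult result = await Function1.Run(request, NullLogger.Instance);

            // The function should return 200 (OK) with the greeting in the body
            var ok = Assert.IsType<OkObjectResult>(result);
            Assert.Equal("Hello, World", ok.Value);
        }
    }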

    Friday, November 16, 2018 7:06:00 PM (GMT Standard Time, UTC+00:00)
    # Thursday, November 15, 2018

    GCast 22:

    Creating an Azure Function Proxy

    Learn how to create a proxy URL using Azure Functions

    Thursday, November 15, 2018 9:49:00 AM (GMT Standard Time, UTC+00:00)
    # Wednesday, November 14, 2018

    In the last article, I showed how to create an Azure Function App. A Function App is not useful by itself: it is just a container for functions, which perform the real work.

    Once you have created an Azure Function App, you will want to add one or more Functions to it.

    Navigate to the Azure Portal, log in, and open your Function app, as shown in Fig. 1.

    Fu01-FunctionApp
    Fig. 1

    Click either the [+] icon next to the "Functions" section on the left (Fig. 2) or the [New function] button at the bottom (Fig. 3).

    Fu02-NewFunctionIcon
    Fig. 2

    Fu03-NewFunctionButton
    Fig. 3

    NOTE: If this Function App already contains at least one function, the [New function] button does not display.

    The "CHOOSE A DEVELOPMENT ENVIRONMENT" page of the "Azure Functions for .NET - getting started" dialog displays, as shown in Fig. 4

    Fu04-ChooseDevEnv
    Fig. 4

    Select the [In-portal] tile and click the [Continue] button to advance to the "CREATE A FUNCTION" page, as shown in Fig. 5.

    Fu05-CreateAFunction
    Fig. 5

    Two triggers are listed: "Webhook+API", which will cause your function to execute after a web service URL is hit; and "Timer", which allows you to schedule your function to run at regular intervals. You can see more triggers by clicking the "More templates…" tile; but, for this demo, select the [Webhook+API] tile and click the [Create] button. After a few seconds, a function is created with an HTTP trigger and some sample code, as shown in Fig. 6.

    Fu06-NewFunction
    Fig. 6

    This sample function accepts a "name" parameter (either in the querystring or in the Body of a POST request) and returns an HTTP 200 (OK) response with the string "Hello, ", followed by the value of the name parameter. If no "name" parameter is supplied,  it returns a 400 (Bad Request) response with an error message.

    You can now modify and save this code as you like.

    In the next article, I will show you how to test this function within the portal.

    Wednesday, November 14, 2018 9:59:00 AM (GMT Standard Time, UTC+00:00)
    # Tuesday, November 13, 2018

    An Azure Function allows you to deploy scalable code to the cloud without worrying about the server or other infrastructure issues.

    Azure Functions are contained within a Function App, so you need to create a Function App first.  To create a Function App, navigate to the Azure Portal, sign in and click the [Create a resource] button, as shown in Fig. 1.

    FA01-CreateAResource
    Fig. 1

    From the menu, select Compute | Function App, as shown in Fig. 2.

    FA02-ComputeFunctionApp
    Fig. 2

    The "Create Function App" blade displays as shown in Fig. 3

    FA03-CreateFunctionAppBlade
    Fig. 3

    At the "App Name" field, enter a unique name for your Function App.

    At the "Subscription" field, select the Azure subscription with which to associate this Function App. Most people will have only one subscription.

    At the "Resource Group" field, select "Create new" and enter the name of a Resource Group to create or select "Use existing" and select an existing resource group in which to store your Function App. A Resource Group is an organizational grouping of related assets in Azure.

    At the "OS" radio button, select the operating system (Windows or Linux) on which you wish to host your Function App.

    At the Hosting plan, select either "Consumption Plan" or "App Service Plan". With the Consumption Plan, you only pay for the time that your functions are running. Since most functions do not run 24 hours a day / 7 days a week, this can be a real cost savings. With the App Service Plan, you pay as long as your functions are available. This is appropriate if you expect clients to be constantly calling your functions.

    At the "Location" field, enter a region in which you want your Functions to run. In order to minimize latency, you should select a region close to any resources with which the Functions will interact.

    At the "Runtime Stack" dropdown, select one of the platforms. Select ".NET" if you plan to write your code in C# or F#. Select "JavaScript" if you plan to create a node function. Select "Java" if you plan to write your code in Java. As of this writing, Java is in Preview, so performance is not guaranteed.

    If you selected "Consumption Plan" hosting plan, you will be prompted for a storage account. Function definitions will be stored in this account. Select an existing storage account or create a new one. I prefer to use a storage account for all my Function Apps in a given Resource Group.

    For extra monitoring, turn on Application Insights and select the same region in which your Function App is located. If this region is not available, select a nearby region.

    Click the [Create] button to create your Function App.

    After your Function App is created, you will want to add a Function to it. I will show how to do this in the next article.

    Tuesday, November 13, 2018 9:54:00 AM (GMT Standard Time, UTC+00:00)
    # Friday, November 9, 2018

    Azure Functions provide a simple way to deploy code in a scalable, cost-effective way.

    The beauty of Azure functions is that the developer does not need to worry about where they are deployed. Azure takes care of spinning up a server and other resources at the appropriate time and scaling out a Function as demand increases. The infrastructure is abstracted away from the developer, allowing them to focus on the code and the business problem.

    Azure functions can be written in C#, F#, JavaScript, or Java.

    Each Function has a "Trigger" which, as the name implies is an event that causes the function code to run. This can be an HTTP request, a message on a queue or message bus, delivery of an email, data inserted into a blob storage container or CosmosDB database, or a timed interval.

    Triggers are just one way that Azure functions can easily connect to other services. There are also bindings available to interact with databases, queues, and other services with a minimum of code.
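
    For example, here is a minimal sketch (not part of any sample in these articles) of a function that combines a timer trigger with a storage queue output binding; the function name, queue name, and schedule are illustrative:

    using System;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Extensions.Logging;

    public static class NightlyReminder
    {
        // The CRON expression fires the function every day at 2:00 AM;
        // the Queue binding writes a message to a storage queue named
        // "reminders" without any explicit queue-client code.
        [FunctionName("NightlyReminder")]
        public static void Run(
            [TimerTrigger("0 0 2 * * *")] TimerInfo timer,
            [Queue("reminders")] out string queueMessage,
            ILogger log)
        {
            queueMessage = $"Reminder generated at {DateTime.UtcNow:O}";
            log.LogInformation("Reminder queued.");
        }
    }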

    One nice feature of Azure Function Apps is the "Consumption Plan" pricing model. Selecting this plan means that you are only charged while your function is running, which can save plenty of money - particularly if your app is not running 24 hours a day every day. Of course, you can also choose to run your function as part of an App Service Plan, in which case you will pay for the entire time the function is available, whether or not it is running. This may be desirable if you already have an App Service Plan running and want to include your functions in that same plan.

    You can create functions directly in the Azure Portal. Or you can create them locally using tools like Visual Studio and Visual Studio Code and deploy them either directly from the IDE or through your continuous integration processes.

    The source code for the Azure Functions run-time is even open source! Check out the code at https://github.com/Azure/azure-functions-host.

    You can get a free Azure account at http://azure.com. You can read more about Azure functions at https://docs.microsoft.com/en-us/azure/azure-functions.

    In upcoming articles, I'll show you how to create, deploy, test, and manage Azure functions.

    Friday, November 9, 2018 9:25:00 AM (GMT Standard Time, UTC+00:00)
    # Thursday, November 8, 2018

    GCast 21:

    Azure Functions Continuous Deployment

    Learn how to configure continuous deployment from GitHub to Azure functions. Each time you push code changes to GitHub, that code is automatically deployed to Azure.

    Thursday, November 8, 2018 9:48:00 AM (GMT Standard Time, UTC+00:00)
    # Tuesday, November 6, 2018

    Azure CosmosDB is a flexible, fast, reliable, scalable, geographically distributed NoSQL database.

    You can create a CosmosDB account and database in the Azure portal.

    Navigate to the Azure portal and log in.

    Click the [Create a resource] button, as shown in Fig. 1.

    CDB01-CreateResourceButton
    Fig. 1

    From the menu, select Database | Azure CosmosDB, as shown in Fig. 2.

    CDB01-DatabaseCosmosDb
    Fig. 2

    The "Create Azure CosmosDB Account" blade displays, as shown in Fig. 3.

    CDB03-CreateAzureCosmosDbAccount
    Fig. 3

    At the Subscription dropdown, select your Azure subscription. Most of you will have only one subscription.

    At the Resource Group dropdown, select an existing resource group or click "Create new" to display the New Resource Group dialog, as shown in Fig. 4.

    CDB04-CreateResourceGroup
    Fig. 4

    In the New Resource Group dialog, enter a unique name for your resource group and click the [OK] button.

    At the "API" dropdown, select the API you want to use to access the databases in this account, as shown in Fig. 5.  Options are

    • Core (SQL)
    • MongoDB
    • Cassandra
    • Azure Table
    • Gremlin (graph)

    CDB05-Api
    Fig. 5

    If you are migrating data from another database, you may want to choose the API that resembles your old database in order to minimize changes to the client code accessing the database. If this is a new database, you may wish to choose the API with which you and your team are most familiar.

    At the "Location" dropdown, select a region in which to store your data. It is a good idea to keep your data near your users and/or near any services that will interact with your data.

    The "Geo-Redundancy" and "Multi-region writes" options allow you to globally distribute your data. There is an extra charge for enabling these features.

    You can enable Geo-Redundancy by clicking the [Enable] button next to "Geo-Redundancy". This creates a copy of your data in another nearby region and keeps that data in sync.

    Click the [Enable] button next to "Multi-region writes" if you wish to allow data to be written in multiple regions. This will improve the performance when writing data to the database.

    Notice the tabs at the top of the page (Fig. 6). The "Basics" tab displays first, but the "Network", "Tags", and "Summary" tabs are also available.

    CDB06-Tabs
    Fig. 6

    The "Network" tab (Fig. 7) allows you to add your CosmosDB account to a specific Virtual Network and Subnet. This is not required.

    CDB07-NetworkTab
    Fig. 7

    The "Tags" tab (Fig. 8) allows you to assign metadata to this CosmosDB account, which may help when grouping together related accounts on a report. This is not required.

    CDB08-TagsTab
    Fig. 8

    The "Summary" tab (Fig. 9) displays all the options you have chosen and validates that you completed the required responses and that all responses are consistent. You can navigate to this tab by clicking the "Summary" tab link at the top or by clicking the [Review + create] button on any other tab.

    CDB09-SummaryTab
    Fig. 9

    Click the [Create] button to begin creating your CosmosDB account. This will take a few minutes. A message displays as shown in Fig. 10 when the account is created and deployed.

    CDB10-DeploymentComplete
    Fig. 10

    As you can see, there are a number of links to documentation and tutorials.

    Click the [Go to resource] button to open the CosmosDB account. By default, the "Quick start" blade displays, as shown in Fig. 11.

    CDB11-CosmosDBQuickStartPage
    Fig. 11

    In this article, I showed how to create a new Azure CosmosDB account. In the next article, I will show how to add a database with containers to that account.

    Tuesday, November 6, 2018 6:28:00 AM (GMT Standard Time, UTC+00:00)
    # Thursday, November 1, 2018
    Thursday, November 1, 2018 9:02:00 AM (GMT Standard Time, UTC+00:00)
    # Monday, October 29, 2018

    Episode 535

    Rajasa Savant on Serverless Azure

    Microsoft Engineer Rajasa Savant describes the "Serverless" technologies available in Microsoft Azure

    Monday, October 29, 2018 8:56:00 AM (GMT Standard Time, UTC+00:00)
    # Thursday, October 25, 2018
    Thursday, October 25, 2018 9:37:00 AM (GMT Daylight Time, UTC+01:00)
    # Thursday, October 18, 2018
    Thursday, October 18, 2018 9:54:00 PM (GMT Daylight Time, UTC+01:00)
    # Thursday, October 11, 2018
    Thursday, October 11, 2018 3:20:41 PM (GMT Daylight Time, UTC+01:00)
    # Thursday, October 4, 2018
    Thursday, October 4, 2018 4:08:28 PM (GMT Daylight Time, UTC+01:00)
    # Thursday, September 27, 2018

    GCast 15:

    Creating an Azure Web App

    Azure Web Apps allow you to host your web sites and applications in the cloud. I walk through the steps of setting up an Azure Web App.

    Thursday, September 27, 2018 9:42:00 AM (GMT Daylight Time, UTC+01:00)
    # Sunday, September 23, 2018

    The Microsoft Bot Framework makes it easier to create a chatbot. But a chatbot is only good if your users have a way of calling it. Microsoft bots can be accessed via a number of channels, including Facebook Messenger, Microsoft Teams, Skype, Slack, and Twilio.

    Bot Settings

    Once you have deployed a bot to Azure, you can view its properties in the Azure Portal, as shown in Fig. 1.

    Fig01-BotProperties
    Fig. 1

    Click "Settings" to display the "Settings" blade, as shown in Fig. 2.

    Fig02-BotSettingsBlade
    Fig. 2

    You will need the "Microsoft App ID" value (Fig. 3)