Why Azure is so remarkable

Technology moves so fast these days that it's easy to take for granted the innovations of recent years, combined with the growing accessibility of new technologies made available to developers at large. Azure continues to usher in new possibilities of virtually unlimited scale while removing the burden of overhead, giving us resources we could have only dreamed of just a short time ago. In that sense, Azure is the Robin Hood of technology, giving the common developer what only those with deep pockets enjoyed previously! What I'm referring to here transcends the technical. It's about empowerment. In keeping with Microsoft's mission to empower every person and every organization on the planet to achieve more, Azure levels the playing field by giving us the power of supercomputers, AI, and the Internet of Things, all with global scale and ease of accessibility.

This ushers in a new approach to architecture. No longer is it necessary to write traditional patterns like pub/sub from scratch; you can stand up services for that. Take Event Hubs as an example: a globally available event ingestion service that can scale to millions of messages per second. Aha! You just did it, didn't you? That last sentence blew past, and perhaps it wasn't as astonishing as it would have been a few years ago. But today it has become commonplace and accepted. It's worth repeating: millions of messages per second. Try standing up your own servers 10 years ago that could handle this volume of data, and at an affordable price. This alone makes it possible for consultants such as myself to offer solutions to small and medium-sized businesses with virtually no overhead or up-front investment. Where I would have (and have before) struggled by provisioning servers and/or hosting, together with writing my own message-based distributed system, I can now just spin up a hub and I'm good to go.
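To give a sense of how little code "spin up a hub and go" really takes, here is a minimal sketch of sending events using the Azure.Messaging.EventHubs SDK for .NET. The connection string, hub name and payload are placeholders for values from your own Azure portal:

```csharp
using System.Text;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

// Placeholders: copy these from your Event Hubs namespace in the Azure portal
const string connectionString = "<your-event-hubs-connection-string>";
const string hubName = "<your-hub-name>";

await using var producer = new EventHubProducerClient(connectionString, hubName);

// Batch events client-side, then send them to the hub in one call
using EventDataBatch batch = await producer.CreateBatchAsync();
batch.TryAdd(new EventData(Encoding.UTF8.GetBytes("{\"reading\": 42}")));
await producer.SendAsync(batch);
```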

Indeed, the inspiring Microsoft commercials featuring Common speak to the times we currently enjoy, the future we look forward to, and the possibilities available to us. We're experiencing a time unlike any I've seen before. It's a great time to be a developer.

Monitor your APIs with Postman

Do you write and maintain APIs? Do you know how they're performing? If any of your endpoints return data, are you sure the schema of the data returned is correct? You could write your own monitoring solution from scratch, but who has time for that? I've found Postman to be a great solution for monitoring my APIs. Granted, using it is like a car mechanic taking their car to a service center to get the oil changed, but for me it's about saving time. Tons of time.

Postman has so many features that it would be tough to cover all of them in one blog entry, so I'll tackle my favorites for now. First, with Postman you can fire off scheduled, automated requests to your APIs, which are grouped into collections, using monitors. Monitors not only let you call your endpoints with optional data parameters, but also check the results of each call with configurable tests. Postman can also automatically generate API docs for you, complete with a curl command ready to copy and paste into your shell. You can even publish these docs with a paid-tier subscription.
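Those configurable tests are small JavaScript snippets attached to a request. As a rough sketch (the temperature field is a placeholder for whatever your endpoint actually returns):

```javascript
// Runs after each monitored request completes
pm.test("status is 200", function () {
    pm.response.to.have.status(200);
});

pm.test("response matches the expected schema", function () {
    const body = pm.response.json();
    // Placeholder assertion; check whichever fields your API promises
    pm.expect(body).to.have.property("temperature");
});
```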

Postman has an online dashboard as well as a desktop app. Here's my dashboard containing a couple of monitors I have set up.

Postman online dashboard

With the desktop app you can set up, configure and call any of your endpoints, create collections and monitors, then sync them to the cloud. Collections are synchronized between the online dashboard and the desktop app when you are signed in. This is a great feature if you want to check things on the go, or are switching between computers. Here's a view from the online dashboard of the call history from a monitor I have set up.

Postman Collection History

There are several usage plans available, including a free tier and paid Pro and Enterprise tiers, each with features you can read about.

Another amazing feature is the API Network. This lets you pull ready-made API client configurations from published third parties right into your desktop app. Here's an example from the Bing Cognitive Services API. You can see my query is all set up for me; I just have to obtain a key and replace the placeholder value with it. This is super cool.

Postman Bing API

But wait, there's more. You can also set up mock servers for your endpoints before they are even live! This is great when you want to simulate a response from a call before you've written the code. Full disclosure: this is not a paid endorsement of Postman, it's just that neat of a tool.

Telemetry data with Raspberry Pi 3, Windows 10 IoT core and Azure IoT Hub

I had a great time getting the Raspberry Pi 3 set up with Windows 10 IoT Core and a BME280 temperature sensor, and sending temperature data to Azure IoT Hub. This was my first experience with each of these components, so I was learning as I went. Working in technology, if you're not used to or not comfortable with learning as you go, you might want to take a step back and re-examine why. It's well known that change is constant in this field, and we should embrace the unknown as an opportunity to learn and grow, not merely stay in our comfort zones. Staying in your comfort zone will get you left behind quickly. I'm reminding myself of this as much as anyone else, but I digress.

First off, Windows 10 IoT Core. Mind blown. Extra points for exceeding expectations. This is a really useful operating system geared toward IoT devices, including the Raspberry Pi. I didn't know what to expect, and was super stoked about how easy it was to set up. I was expecting lots of trial and error, setup time, guessing, incompatibility and a steep learning curve. But with Windows 10 IoT Core, not only flashing my micro-SD to install it, but setting it up and getting it working with my Raspberry Pi was a breeze. Here's a quick step-by-step to get started.

1. Download and install Windows IoT Core Dashboard
2. Insert your SD card into your computer and navigate to the ‘Setup a New Device’ tab in the dashboard
3. Select the version of IoT Core you want to install, a name for your device, and an administrator password
4. Click ‘download and install’. The image is downloaded from the internets and installed on the card

You're good to go now. Slide the SD card into your Pi and boot it. The IoT Core Dashboard should recognize your Pi on the network, and it will appear on the My Devices tab. If it does not, try reading this. When you see the device in the IoT Dashboard, right-click it to open a browser window, sign in with the admin user name and password you provided, and you will be interacting with the OS through the Device Portal.

Windows 10 IoT Core has some pretty neat tools built in that you can reach through the Device Portal. You can view installed apps, see which apps are running, and start and stop apps. There are even sample apps you can deploy to your device and run.

Windows 10 IoT Core App Samples

View CPU, memory, I/O and network usage

Viewing Resource Usage in the Windows 10 IoT Core Device Portal

View available networks

Viewing Available Networks in Windows 10 IoT Core Device Portal

For this post, I'm going to demonstrate how I was able to use the BME280 with the Raspberry Pi and Windows 10 IoT Core to send temperature data to Azure IoT Hub. One of the great things that attracted me to Windows 10 IoT Core was the ability to write C# code to access, interact with and control the various components of the Raspberry Pi. For this example, I was able to use Visual Studio Community Edition and mix and match samples I found online, either from the sensor vendor or MSDN.

So now to the fun part, and back to 'mind blown': I was actually writing C# (not Java, not Python, not Node.js, but C#) to interact with the Pi, I/O pins and everything. We've all got our favorite languages, and C# happens to be mine. Not only that, but I was able to send messages to the Hub.

The IoT Hub is interesting and revolutionary. I say revolutionary because I remember, not long ago, when such concepts were not available and easily accessible to the masses. There were few options to send data to a nebulous cloud endpoint with virtually unlimited scale, and to plug in any number of listeners which can then take action via any number of workflows or triggers. There are solutions I've built in the past that could have benefited from the Hub; I wish I could go back and retrofit them now. Sending temperature data, or any other data, to the IoT Hub (and for that matter the Azure Event Hub) implements 'set and forget' in a new way: a sort of send and forget. Get your data up to the cloud and handle it there. The Hub acts as an abstraction layer between ingestion and processing; more precisely, a "component or service that sits between event publishers and event consumers to decouple the production of an event stream from the consumption of those events". This is easily understated and misunderstood. It's truly remarkable, and also made easily available at scale with Azure.

Let's dive into the code. I'm using an open source library to read the temperature from the BME280 sensor. First, I just new up a timer to call the _timer_Tick method when the app starts and the entry point is hit (the OnNavigatedTo event of the main page).
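Something like this (a minimal sketch for a UWP app; the 30-second interval is an arbitrary choice of mine):

```csharp
using System;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Navigation;

public sealed partial class MainPage : Page
{
    private DispatcherTimer _timer;

    protected override void OnNavigatedTo(NavigationEventArgs e)
    {
        // Fire the tick handler on an interval; 30 seconds is a placeholder
        _timer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(30) };
        _timer.Tick += _timer_Tick;
        _timer.Start();
    }
}
```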

Then in the timer tick event, I read the temperature from the sensor.
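Roughly like so; _bme280 stands in for whatever driver object your BME280 library gives you, and ReadTemperature() is a placeholder for its read method:

```csharp
private async void _timer_Tick(object sender, object e)
{
    // Read the current temperature (Celsius) from the BME280 over I2C
    float temperature = await _bme280.ReadTemperature();

    // Hand the reading off to the IoT Hub (shown next)
    await SendToHubAsync(temperature);
}
```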

Then I send it to the hub using the Microsoft Azure Devices Client.
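Here is the shape of that call using the Microsoft.Azure.Devices.Client package; the JSON payload format and the "myPi" device name are just my own convention:

```csharp
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

private async Task SendToHubAsync(float temperature)
{
    // Serialize the reading as a simple JSON payload ("myPi" is a placeholder)
    string payload = $"{{\"deviceId\":\"myPi\",\"temperature\":{temperature}}}";
    var message = new Message(Encoding.UTF8.GetBytes(payload));

    // Device-to-cloud telemetry message
    await _deviceClient.SendEventAsync(message);
}
```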

Don't forget to new up your device client with your hub connection string and device ID, which you get when you set up your hub and register your device.
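For example (the connection string here is a placeholder; copy the real one from your registered device in the Azure portal):

```csharp
// Device connection string from the IoT Hub portal: HostName, DeviceId and SharedAccessKey
private readonly DeviceClient _deviceClient = DeviceClient.CreateFromConnectionString(
    "HostName=<your-hub>.azure-devices.net;DeviceId=<your-device>;SharedAccessKey=<your-key>");
```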

To deploy this app to your device, right-click the project in Visual Studio, then click Properties. On the Properties page, select the Debug menu item, then set "Remote Machine" as your target device and enter the IP address of your device in the Remote Machine setting. This will deploy the app to the IoT device and allow you to debug it.

Deploying to Windows 10 IoT Core from Visual Studio

Now let's see the data in real time as it arrives in the hub. To do this, we'll download, install and run Device Explorer. Enter your hub connection string on the connection string tab, and your device will appear on the Management tab. Select that device, click the Data tab, then click Monitor. You should see the telemetry messages appear as they are sent to the hub. Here's mine:

View device to cloud messages using Device Explorer for IoT Hub

So there you have it. Now imagine scaling this to 1000 devices, then building a reporting and monitoring solution from this. Happy coding!

Say goodbye to debugging Xamarin iOS on a different machine with Remoted iOS Simulator (for Windows)

The Remoted iOS Simulator (for Windows) just came out for Visual Studio, and if you haven't tried it, I definitely recommend doing so if you are tired of remoting into your Mac build server to debug your Xamarin iOS apps.

The Remoted iOS Simulator lets you "Test and debug iOS apps entirely within Visual Studio on Windows". I must say, having tried it out, it works as advertised! It's wonderful to be able to run a fully live, real iOS app in a simulator window locally. It saves time because I don't have to remote into my Mac build server, or be physically near it, to interact with a running app.

Simply download the installer, run it, and you're good to go. Set your Xamarin iOS project as the startup project. When you hit Debug, an iOS simulator window pops up right on your Windows machine, and you can interact with the app as it runs on the Mac.

The simulator has some basic but useful features like locking the home screen, setting the location, and the ability to rotate and shake the device.

Here is a screenshot of a map view in a real app. Giddy! Best of all, the simulator stays running and connected to your Mac after you stop debugging, so you can use a live XAML previewer like Gorilla Player for design-time visuals. This is one breakthrough that deserves full applause from the masses.

An app running in the Remoted Simulator for Visual Studio on Windows

Calling all DataTables!

We use DataTables a lot, and a problem we face is column headers not adjusting when the user expands or collapses the sidebar menu in our UI template. The DataTables table body expands or shrinks according to the horizontal page space gained or lost when the sidebar menu expands or collapses, but for some reason the headers do not. The result is a misaligned table that looks messy.

I decided to address this once and for all, and went looking for a solution I could implement at least on a page-by-page basis. I quickly discovered this approach would be extremely inefficient at best, since I was dealing with a system with dozens and dozens of tables, if not hundreds. And that approach isn't very DRY. Let's face it, that approach is never really the best approach.

Lo and behold, a quick search turned up a call in the DataTables API that summons all visible DataTables instances on the page, whatever the page may be. Kind of like a bat signal! I put this in the shared layout of my ASP.NET MVC app, and it is invoked on each page when the sidebar menu expands or collapses. Voilà! If there are any DataTables instances on the page, the columns adjust. I had to delay the call to the DataTables API because the adjustment was happening before the class change had taken place.
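Here's the gist of it, as a rough sketch; the #sidebar-toggle selector and the 250 ms delay are placeholders for whatever your template actually uses:

```javascript
// In the shared layout, so it runs on every page
$('#sidebar-toggle').on('click', function () {
    // Wait for the sidebar's CSS transition to finish before measuring widths
    setTimeout(function () {
        // The bat signal: grab every visible DataTable on the page
        // and realign its column headers with the table body
        $.fn.dataTable
            .tables({ visible: true, api: true })
            .columns.adjust();
    }, 250);
});
```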