Just recently, I was selected by Microsoft to participate in an Azure Logic Apps study. They reached out via email and we got started.
Disclosure: Although I have created a lot of Power Automate flows, I had never created a logic app before. So I was really excited to participate in this study and learn more about Azure Logic Apps.
The Microsoft team gave me a choice of four scenarios for implementing a logic app, and I could pick any one of them. Here are some of the key terms of the study:
I couldn’t read any docs or blogs prior to the study
I had to use my own Azure account – Of course, I have a developer tenant of my own.
During the study itself, I could refer to docs or blogs to complete the scenario
The whole study was recorded
Here’s the scenario I chose:
“Imagine that you have been asked to create a system that would automatically organize the pictures that your company’s photographers take and upload. Every time a new image is uploaded, you want to run it thru computer vision API. If there is more than 90% chance the image contains a person, save it to one blob container, if it doesn’t, save it to another blob container. Please use Azure Logic Apps to do this.”
Let’s look at how I implemented this.
First, here’s what the overall steps (trigger and actions) of the Azure logic app look like.
Let’s dive into each step in detail:
Step 1 – Image is added to SharePoint
Since the study didn’t specify where the image would be uploaded, I naturally went with SharePoint. So the trigger of the app is ‘When a file is created in a folder’. Input all the details: the SharePoint site, the library, and how frequently you want to check for newly added items.
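In a logic app, the SharePoint connector handles all of this polling for you, so there is nothing to code here. Purely to make the trigger concrete, here is a rough Python sketch of an equivalent poll against the Microsoft Graph API; the site ID and token are placeholders, not values from the study.

```python
# Hypothetical equivalent of the 'When a file is created in a folder'
# trigger: poll a SharePoint document library via Microsoft Graph and
# pick out files we haven't processed before.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
SITE_ID = "<your-sharepoint-site-id>"        # placeholder
TOKEN = "<token-with-Sites.Read.All-scope>"  # placeholder

def list_new_files(seen_ids: set) -> list:
    """Return drive items in the library's root folder that are new to us."""
    resp = requests.get(
        f"{GRAPH}/sites/{SITE_ID}/drive/root/children",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json().get("value", [])
    # The 'file' facet is present only for files (folders have a 'folder' facet)
    return [i for i in items if "file" in i and i["id"] not in seen_ids]
```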
Step 2 – Initialize a Boolean variable
Initialize a Boolean variable that tracks whether a ‘Person’ was detected in the uploaded image.
Step 3 – Detect Image – Computer Vision
This step is the crux of the app. It is an action based on the Computer Vision API connector. The Computer Vision API is an Azure Cognitive Service that can be used to extract information from images to categorize and process visual data.
The ‘Detect Objects’ action generates a list of detected objects in the image supplied to it. The image source can be either the content of the image file itself or a reference URL. I chose to pass in the file content produced as part of the output of Step 1.
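Under the hood, this action calls the Computer Vision ‘detect’ REST endpoint. Here is a minimal Python sketch of the same call, assuming you POST the raw file content the way the action does; the endpoint and key are placeholders for your own Cognitive Services resource.

```python
# Minimal sketch of the 'Detect Objects' call: send the image bytes to
# the Computer Vision v3.2 detect endpoint and return the detected objects.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-computer-vision-key>"                                # placeholder

def detect_objects(image_bytes: bytes) -> list:
    resp = requests.post(
        f"{ENDPOINT}/vision/v3.2/detect",
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
        timeout=30,
    )
    resp.raise_for_status()
    # Each entry looks like {"object": "person", "confidence": 0.92, "rectangle": {...}}
    return resp.json().get("objects", [])
```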
Step 4 – Evaluate each object detected
This is where the logic kicks in. The step above may detect multiple objects, so we need to evaluate each one to check whether it is a person and whether the confidence score of that detection is greater than 0.9.
If both conditions evaluate to true, we set our variable to true.
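Expressed in plain Python, the loop and condition from this step look something like the sketch below, where the object list is the output of the detect call shown in Step 3.

```python
def contains_person(objects: list, threshold: float = 0.9) -> bool:
    """Scan every detected object; flip the flag when a person is found
    with a confidence score greater than the threshold."""
    person_detected = False  # the Boolean variable from Step 2
    for obj in objects:
        if obj.get("object") == "person" and obj.get("confidence", 0) > threshold:
            person_detected = True
    return person_detected
```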
Step 5 – Check if person exists
I had created a storage account earlier with two containers: ‘With Person’ and ‘Without Person’.
In this step, we check our variable: if it is true, we create a new blob in the ‘With Person’ container; if it is false, we create a new blob in the ‘Without Person’ container.
We do this using the ‘Create blob’ action, which uploads a blob to Azure Blob Storage.
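For comparison, here is a minimal Python sketch of this routing step using the azure-storage-blob SDK. The connection string is a placeholder, and the container names are lowercased with hyphens because Azure container names can’t contain spaces or capital letters; adjust them to match your own storage account.

```python
# Sketch of Step 5: route the image to one of the two containers
# depending on whether a person was detected.
from azure.storage.blob import BlobServiceClient

CONN_STR = "<your-storage-connection-string>"  # placeholder

def save_image(file_name: str, image_bytes: bytes, person_detected: bool) -> None:
    container = "with-person" if person_detected else "without-person"
    service = BlobServiceClient.from_connection_string(CONN_STR)
    blob = service.get_blob_client(container=container, blob=file_name)
    blob.upload_blob(image_bytes, overwrite=True)
```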
This is how I completed the study and also got to create my first logic app.