What if Inspection forms were backed by Machine Learning?
10. maj 2017
Af August Engkilde
We all know that checklists are a good way to remember to do things. We use them privately: the grocery list, the holiday packing list, the personal goals list, and so on. So what's the problem? You made a checklist, but you did not set anything up to ensure that you actually completed the to-dos on it. Or you did not recognize that the results of your lists may contain patterns you can learn from to improve your next checklist.
At work, you make checklists to remember tasks during the day, or your actual work may consist of checking and measuring things in variations of checklists, such as inspection forms.
The biggest issue with all those lists and forms, as I see it, is that a check or measurement, or even a missing check, too often has no consequence. The work can therefore be worthless: a waste of time, unnecessarily stressful, or even a source of dangerous situations.
Some companies use mind maps when developing inspection forms, and that is not a bad idea. This way, you can map out what to check and measure, what to analyze, and what to do thereafter. In addition, some companies can generate new checklists for the end users by doing simple analytics on earlier inspections. But this relies mostly on predefined patterns, such as "after an object fails ten times in a row, replace the object". The question is: does that always lead to the right action or consequence? And could this be handled automatically in a more dynamic and intelligent way?
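To make the contrast concrete, a predefined pattern like "after an object fails ten times in a row, replace the object" is just a hard-coded rule. A minimal sketch (hypothetical names, not taken from any real inspection product):

```typescript
// One inspection result for an asset.
type InspectionResult = { assetId: string; passed: boolean };

// A hard-coded, predefined rule: recommend replacement after a run of
// `failLimit` consecutive failures. This is exactly the kind of static
// pattern that a more dynamic, learned approach could improve on.
function recommendAction(history: InspectionResult[], failLimit = 10): string {
  let consecutiveFails = 0;
  for (const r of history) {
    consecutiveFails = r.passed ? 0 : consecutiveFails + 1;
    if (consecutiveFails >= failLimit) return "replace";
  }
  return "keep";
}
```

The rule works, but it never adapts: the limit of ten is fixed whether the asset is a cheap bolt or a critical valve.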
I find it very interesting to investigate new possibilities and solutions in this area.
How much can we use machine learning to analyze inspection forms and suggest actions?
Can we use automatic analytics of pictures and video, even 360° video? Can we use image recognition to take a picture of an object, for instance a potted plant or any other asset, and have a machine learning service such as IBM's Watson recognize that object and send back a to-do list and an inspection form for that specific thing? At 2BM, we have already made proof-of-concept apps that can do this in a simple scenario with plants and fruits. Even oral feedback can be given to the user, like: "The orchid looks like it has been given too much water. Does your finger get wet if you stick it into the soil? Yes [ ] No [ ]. If 'Yes', do this…. if 'No', do this…. Tomorrow I will send you a reminder and a list of new inspections to do, and I will also ask you to take a new picture – have a nice day! By the way, it looks like there is a crack in the pot; maybe you should get a new one before it breaks!"
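The recognize-then-respond flow described above can be sketched as follows. The classifier here is a stub standing in for a real service such as IBM's Watson, and all names and the form data are illustrative assumptions, not a real API:

```typescript
// Result returned by an image classification service.
type Classification = { label: string; confidence: number };

// Stub classifier: a real app would POST the image bytes to an ML service
// and receive a label plus a confidence score back.
function classifyImage(imageBytes: Uint8Array): Classification {
  return { label: "orchid", confidence: 0.93 };
}

// Map recognized objects to their inspection forms (illustrative data).
const formsByLabel: Record<string, string[]> = {
  orchid: [
    "Is the soil wet?",
    "Are the leaves discolored?",
    "Is the pot cracked?",
  ],
};

// Take a picture, recognize the object, and return the matching form.
function inspectionFormFor(imageBytes: Uint8Array): string[] {
  const result = classifyImage(imageBytes);
  if (result.confidence < 0.5) {
    return ["Object not recognized - inspect manually"];
  }
  return formsByLabel[result.label] ?? ["No form defined for " + result.label];
}
```

The interesting design point is the fallback path: when the service is unsure, the app should degrade gracefully to a manual inspection rather than guess.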
This is a seemingly primitive example, but it is still very complex, and it raises many interesting questions. Can we actually recognize more complex objects, and can we assist a person in doing better, more relevant inspections and taking better actions?
This leads to why dynamically created form-based applications are a good starting point for your business.
As a consultant, I frequently run into companies creating inspection form applications that are statically made. This means they have to recompile or create a new app whenever an inspection form needs new checkpoints, or a new project requires a completely new inspection form. This should never be necessary if the front-end presentation of the list of checklists, and the checklists themselves, is built dynamically from a schema delivered by the backend. This way, the app is just a generic app that can load different versions of checklists, depending on the schema received from the backend at load time. Which schema to load can be determined by a backend administrator, by a selection list in the app itself, by the user's role, the user's geolocation, the temperature, the altitude, by a picture taken by the device camera, or by any other relevant predetermined information.
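A minimal sketch of such a schema-driven checklist screen, assuming a simple JSON schema shape of my own invention rather than any real product's format:

```typescript
// A field in the inspection form, as delivered by the backend.
type FieldSchema =
  | { type: "checkbox"; id: string; label: string }
  | { type: "number"; id: string; label: string; unit?: string };

// The whole form schema the generic app receives at load time.
type FormSchema = { title: string; fields: FieldSchema[] };

// Render the schema as plain text lines; a real app would build
// native UI widgets from the same schema instead.
function renderForm(schema: FormSchema): string[] {
  const lines = [schema.title];
  for (const f of schema.fields) {
    if (f.type === "checkbox") {
      lines.push(`[ ] ${f.label}`);
    } else {
      lines.push(`${f.label}: ____ ${f.unit ?? ""}`.trimEnd());
    }
  }
  return lines;
}
```

Because the app only knows the schema format, the backend can ship a brand new inspection form tomorrow without anyone recompiling anything.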
The solution should of course be backed by a simple app admin page, from which new inspection forms can easily be built by drag and drop. And the model should always support field validation.
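The same schema that builds the form can also declare what values are acceptable, so validation travels with the form instead of being hard-coded in the app. A sketch, with rule names that are my own assumption:

```typescript
// Validation rules attached to form fields in the schema.
type Rule = { fieldId: string; required?: boolean; min?: number; max?: number };

// Check submitted values against the schema's rules and collect errors.
function validate(
  values: Record<string, number | boolean | undefined>,
  rules: Rule[],
): string[] {
  const errors: string[] = [];
  for (const r of rules) {
    const v = values[r.fieldId];
    if (r.required && v === undefined) {
      errors.push(`${r.fieldId} is required`);
    }
    if (typeof v === "number") {
      if (r.min !== undefined && v < r.min) errors.push(`${r.fieldId} below ${r.min}`);
      if (r.max !== undefined && v > r.max) errors.push(`${r.fieldId} above ${r.max}`);
    }
  }
  return errors;
}
```

When an admin tightens a tolerance on the admin page, every device picks up the new rule the next time it loads the schema.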
But the most interesting part begins when we let machine learning mechanisms create new inspection forms for us, or let machine learning algorithms determine what actions should be taken, based on analytics of the incoming inspection forms. For example, a sign of a dangerous situation on a construction site could initiate a drone or a robot being sent to the location to make further checks…
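As a hedged sketch of that last idea: instead of hand-written rules, a model could learn a danger weight per check from past incidents. Here the weights are hard-coded stand-ins for learned values, and all names are hypothetical:

```typescript
// Per-check danger weights; in a real system these would be learned
// from historical inspection data, not hard-coded.
const weights: Record<string, number> = {
  scaffoldingLoose: 0.6,
  missingHelmet: 0.3,
  wetSurface: 0.2,
};

// Sum the weights of the failed checks in an incoming inspection form.
function dangerScore(observations: Record<string, boolean>): number {
  let score = 0;
  for (const [check, failed] of Object.entries(observations)) {
    if (failed) score += weights[check] ?? 0;
  }
  return score;
}

// Above the threshold, dispatch a drone for further checks; below it,
// just log the result.
function nextAction(observations: Record<string, boolean>): string {
  return dangerScore(observations) >= 0.5 ? "dispatch-drone" : "log-only";
}
```

The point of learning the weights rather than writing them by hand is that the system can discover which combinations of failed checks actually preceded incidents.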
OK, I admit that my imagination may be very optimistic, but it is only by testing and exploring new ideas that we learn…
August Engkilde is an Enterprise Mobility Consultant at 2BM. He has worked on a wide range of custom mobile applications for more than 10 years. His main focus is giving the end user the best possible personal user experience, as well as ensuring that the right analyzable dataset gets back to the company's database.