ITC571 - Update 6 - Inspiration!

Inspiration and Epiphany

I still couldn’t pin down the purpose of this whole thing, the fundamental findings of my review that I would actually talk about. I was getting severe writer’s block, but I figured: just start the damn thing and you’ll figure it out. So I started in the middle of the report, on the literature review.

And then, out walking with my daughter, the thought landed. It builds on the ideas from the previous blog in update 5: the real finding is that we need to standardise the datasets and data collection methods used for machine learning in pest detection. To make it practical and useful, we need to make it easy to collect the images or video that can then be used to determine pest incidence and severity.

And that’s the crux of the whole thing! Split the models out: automate the data collection with a lightweight YOLO model on the device, feed those images to the cloud (or somewhere with a bit more compute) and classify them there. Saves dragging a GPU through the midrow, or trying to stream video back to the cloud via a dodgy 3G connection that doesn’t work in the dip near the creekline…
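To make that two-tier idea concrete, here’s a minimal sketch of the split in plain Python. The function names (`edge_detect`, `cloud_classify`), the thresholds, and the stub scoring are all hypothetical placeholders: the real tier one would be a small YOLO model running on the phone, and tier two would be the heavier classifier sitting on the cloud compute.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    """A candidate frame flagged by the on-device model."""
    image_id: str
    confidence: float

# --- Tier 1: lightweight edge pass (stub standing in for a small YOLO model) ---
def edge_detect(image_id: str, score: float, threshold: float = 0.3) -> List[Detection]:
    """Cheap on-device filter: only frames above threshold get uploaded."""
    return [Detection(image_id, score)] if score >= threshold else []

# --- Tier 2: heavier cloud-side classifier (stub for the big model) ---
def cloud_classify(det: Detection) -> str:
    """Expensive classification runs only on the uploaded candidates."""
    return "pest" if det.confidence >= 0.5 else "not_pest"

def pipeline(frames):
    """frames: list of (image_id, edge_score) pairs captured in the midrow."""
    uploads = [d for image_id, score in frames
               for d in edge_detect(image_id, score)]
    return {d.image_id: cloud_classify(d) for d in uploads}

results = pipeline([("frame_001", 0.9), ("frame_002", 0.1), ("frame_003", 0.4)])
# frame_002 never leaves the device; only candidate frames cost bandwidth
```

The payoff of the split shows up in `pipeline`: the dodgy 3G link only has to carry the frames tier one flags, not a raw video stream.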

So with that final puzzle piece I’ve been able to write far more easily these last few days, and I’ve nearly got the thing finished. Tying it all back to this two-tier model means it all makes sense to me now. It even solves the problem of model development and comparison: if we can just strap a phone on the back of a spray rig and generate standardised datasets of vine structures, then we’ll have plenty of data to train and test on.

Tang et al. (2023) had the right idea, strapping a smartphone to a trailer. This is how we automate the image collection!

https://www.hindawi.com/journals/ajgwr/2023/8634742/