COMPUTER VISION IN AGRICULTURE
The future of agriculture will be powered by AI, machine learning, and the teams that build them. Machine learning is not just useful for complex, data-heavy, all-knowing tools; the future of farming will be far more visual. Plant scanning is already here, and it is no longer relegated to scouts walking the rows and pecking at notes on a clipboard (Note: one thing I have learned after working in agricultural tech for over a decade is that sometimes the smartest tools are boring). As more image-processing power reaches our hands, computer vision will likely become more present on the farm: crops will be monitored remotely and photographed from the moment they take root, and when they are ready to harvest they will be seen at their purest, a true sign of maturity.
However, although computer vision is being applied to farming in the field, I don't expect it to reach near-term consumer farm products. Designing for computer vision carries too many caveats and complexities that an agricultural developer and a user of farm equipment would have to contend with. You need a powerful data framework, SQL or otherwise, with large-scale access to massive farm datasets. A farmer's PC would need access to those datasets as well as the development team's knowledge of crop fungicide effectiveness, fertilizer labelling, and other data that could otherwise only be dug up manually using Google Earth. It will still be a challenge to have enough compute power and network connectivity to run detailed scans, so a commercial farm with sparse records would start at a disadvantage compared to a farm with far more detailed records. If demand for computer vision in farm maintenance, training, and manufacturing does grow, our research will likely lean on real-world photos and public clouds of training data to further the cause.
The software development team at On Device Pro has partnered with growers alongside our co-workers at On Device Pro. From taking high-resolution crop portraits to running on-farm capacity tests, the farmers who have already adopted computer vision solutions use them to support or carry out specific tasks in the field, sometimes planning planting with panoramic images of plants and sometimes tracking the movement of workers or commercial products across the field. The data is tremendously valuable. One of the simplest ways to understand and compare performance across different plants is to look at how long each is exposed to sunlight and water. Furthermore, all of the farm's machine learning products, from predictive farm analysis to object detection, are built on unstructured data derived from those data-heavy observation sessions, so the optimal state of these platforms is data-centric. With that in mind, our work on vegetation sensing is still very organic, and much more education is needed.
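To make the sunlight-and-water comparison above a little more concrete, here is a minimal sketch in Python. The observation log, its column names, and the CSV file are hypothetical stand-ins for whatever records a farm actually keeps; this is an illustration, not a description of our pipeline.

```python
import pandas as pd

# Hypothetical observation log: one row per plant per day, with hours of
# direct sunlight, litres of water received, and a measured growth value.
logs = pd.read_csv("plant_observations.csv")  # columns: plant_id, date, sun_hours, water_l, growth_cm

# Total exposure and growth per plant, then a simple "growth per hour of sun"
# figure to compare plants on roughly equal footing.
per_plant = logs.groupby("plant_id").agg(
    sun_hours=("sun_hours", "sum"),
    water_l=("water_l", "sum"),
    growth_cm=("growth_cm", "sum"),
)
per_plant["growth_per_sun_hour"] = per_plant["growth_cm"] / per_plant["sun_hours"]

print(per_plant.sort_values("growth_per_sun_hour", ascending=False).head())
```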
Much of this technology can be used in diverse ways, ranging from small nuggets of information that provide visual feedback about the health of the crops being grown to large-scale image processing. Our work could provide additional data about our crops or find applications in a number of other fields. Even so, computer vision is likely to face an uphill battle even under the right conditions.
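As one example of the small, visual health feedback mentioned above, here is a minimal sketch that scores the green cover in a crop photo using an excess-green index. This is a common technique I am choosing for illustration, not something specific to our work, and the file name and threshold are assumptions.

```python
import cv2
import numpy as np

# Load a crop photo (hypothetical file name) and split into colour channels.
img = cv2.imread("field_plot.jpg").astype(np.float32) / 255.0
b, g, r = cv2.split(img)  # OpenCV loads images in BGR order

# Excess Green index (ExG = 2G - R - B), a rough proxy for healthy green
# vegetation in ordinary RGB imagery.
exg = 2.0 * g - r - b

# Fraction of pixels above a simple threshold gives a crude "green cover" score.
green_cover = float((exg > 0.1).mean())
print(f"Approximate green cover: {green_cover:.1%}")
```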
A team of researchers from the California Institute of Technology recently published results that outline a potential way to see into plant leaves and inform the correct treatments.
In many settings, such as at a hospital bedside or on a machine learning workbench, a computer system is already used to examine deep slices of a piece of tissue and offer diagnostic insights, identifying abnormal tissue regions. Sometimes a dermatologist needs to view the same tissue to interpret a facial dermatitis diagnosis. Not only can such a technology take on the role a regular microscope plays in skin diagnosis, it also has the potential to analyse plant and animal tissue in new ways.
The team initially created this technology as an engineering project. By developing machine vision to analyse tissue wound photos (LCMS), in which perforations are detected on the microscope film rather than in the cells themselves, and by applying that analysis to plant and animal tissue, the team hoped to gain an unprecedented ability to catch skin diseases early.
To make such a system work, the
researchers first applied computer vision techniques to tissue grafts such as
bone grafts. The images were used to train a computer system that derived a
machine-learned tissue map of the wound around the organ graft, and with these
maps, the system could classify tissue areas and distinguish them from individual cells.
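The article does not say what kind of model derived that tissue map, so purely as a sketch of the training step, here is one way a simple patch classifier could be built. The patch files, label scheme, and choice of a random forest are all my assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical training data: small image patches cut from graft images,
# each labelled with the tissue type it shows (e.g. 0 = background,
# 1 = healthy tissue, 2 = wound margin).
patches = np.load("graft_patches.npy")   # shape: (n_patches, 32, 32, 3)
labels = np.load("graft_labels.npy")     # shape: (n_patches,)

X = patches.reshape(len(patches), -1)    # flatten each patch into a feature vector
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2, random_state=0)

# A simple baseline classifier stands in for whatever model the team used.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("patch accuracy:", clf.score(X_test, y_test))
```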
Computer vision also performs a more
complex kind of “landmarking” of foliage. The researchers fixed a landmark in
the leaf using mechanical pruning tools and an optical gel probe. Once the
researchers fed the images through the system, it successfully identified the marker and segmented the surrounding leaf into different tissues.
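The article does not explain how the physical landmark drives the segmentation. Marker-seeded watershed segmentation is one standard way to do this, and the sketch below uses that technique as an assumption of mine, not as the team's actual method; the file name is likewise hypothetical.

```python
import cv2
import numpy as np

# Load a leaf image (hypothetical file) and build rough foreground seeds
# around the physical landmark, approximated here by Otsu thresholding.
img = cv2.imread("leaf_with_marker.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, fg = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Label each connected seed region, then let watershed grow the regions
# outward so every pixel is assigned to a region or a boundary (-1).
n_labels, markers = cv2.connectedComponents(fg)
markers = cv2.watershed(img, markers.astype(np.int32))

print(f"segmented into {n_labels - 1} regions (boundaries marked with -1)")
```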
The computer had to be trained to segment tissues precisely. The system focused on only a fraction of the tissues, and fewer than half of them were imaged at consistent micrometre scales. Because tissue age is difficult to establish from small areas within a larger body, classifying each of them was challenging. One consequence was that each tissue was segmented only into small single cells, which kept them from being grouped into larger organ-level structures. One solution was to classify each tissue as a single, individual region.
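One way to read that fix is as a post-processing step that merges fine per-cell predictions into connected regions after classification. The sketch below shows that idea under my own assumptions about the label map; it is not the paper's exact procedure.

```python
import numpy as np
from scipy import ndimage

# Hypothetical per-pixel label map produced by the classifier, where each
# value is a tissue class (0 = background, 1..k = tissue types).
label_map = np.load("predicted_labels.npy")   # shape: (H, W), integer classes

# For each tissue class, merge touching pixels into connected regions so a
# tissue is treated as one object rather than many separate cells.
for tissue_class in np.unique(label_map):
    if tissue_class == 0:
        continue
    mask = label_map == tissue_class
    regions, n_regions = ndimage.label(mask)
    sizes = ndimage.sum(mask, regions, range(1, n_regions + 1))
    print(f"class {tissue_class}: {n_regions} regions, largest {int(sizes.max())} px")
```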
Combining all these images was a bit like a flight simulator in which the pilot tries to manoeuvre an aircraft using a control system. As the pilot attempts a complex manoeuvre, blocky shapes drift in and out of the cockpit view, and they must aim in exactly the right direction for the manoeuvre to succeed. Pulling that off is extremely difficult. In this case, the position of each image patch within the larger picture had to be measured carefully, and that required very precise data.
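For a rough picture of what measuring a patch's position can mean in code, here is a minimal sketch that pastes image patches onto a larger canvas at known pixel offsets. The patch files, offsets, and canvas size are invented for illustration and do not come from the study.

```python
import numpy as np

# Hypothetical patches and their measured top-left offsets (row, col) in the
# full mosaic; in practice such offsets would come from the imaging rig.
patches = [np.load(f"patch_{i}.npy") for i in range(4)]   # each (h, w, 3)
offsets = [(0, 0), (0, 480), (360, 0), (360, 480)]

canvas = np.zeros((720, 960, 3), dtype=patches[0].dtype)
for patch, (row, col) in zip(patches, offsets):
    h, w = patch.shape[:2]
    canvas[row:row + h, col:col + w] = patch              # paste at measured position

print("mosaic shape:", canvas.shape)
```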
The researchers adopted a particular strategy for classifying tissue. They introduced photos of the tissue to the system around a fixed axial point from which a boundary could be derived. In this way, the system could look around the tissue and classify it as either a normal state or an outlier corresponding to diseased, possibly cancerous tissue. From that classification, the system could decide whether a stronger targeted treatment is needed, whether antiseptic coatings are needed, or whether an antibiotic should be applied.
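The article does not describe how that decision gets made. Purely as an illustration, the sketch below maps an outlier score from the classifier to one of the treatment options mentioned above; the thresholds and treatment names are entirely hypothetical.

```python
# Hypothetical decision rule layered on top of the tissue classifier's output.
# The thresholds and treatment names are illustrative only.
def choose_treatment(outlier_score: float) -> str:
    """Map an outlier score in [0, 1] to a treatment recommendation."""
    if outlier_score < 0.2:
        return "no treatment needed"
    if outlier_score < 0.5:
        return "apply antiseptic coating"
    if outlier_score < 0.8:
        return "apply antibiotic"
    return "flag for expert review"

for score in (0.1, 0.4, 0.7, 0.95):
    print(score, "->", choose_treatment(score))
```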
The researchers tested this system on several real research plants such as basil, soybean, and leafy greens, as well as various plant-derived lipoproteins. Without a system that classified tissues well enough, the researchers did not observe any tissue change, yet they still obtained results from the research plants just fine. Theoretically, the researchers could augment this system with additional artificial intelligence to adjust blood perfusion and surface treatment, and perhaps to detect a tumour even more reliably.
Most experimental research on the integration of computer vision (and its offspring, Deep Perception) is led by algorithm designers and practitioners who need more practice in these fields before they can apply them to chemistry and biology. The lesson here, though, is that science is not about staying in a narrow lane; it is about taking on the weighty problems, tackling key challenges, and honing development along the way. Now that there has been successful innovation in the form of this technology, the next step is to build on it and create systems with broader applications.