If you’ve been paying attention to the news, there’s little chance you’ve missed headlines about the latest applications of big data – blockchain, driverless cars and artificial intelligence.
Protecting biodiversity might not be the first thing that comes to mind, but our Data61 team is dedicated to using data science to solve pressing problems, and protecting biodiversity is a venture that grows in importance every single day.
Our long-running project in the Amazon Rainforest, ‘Project Providence’, is entering its final phase. We’ve teamed up with scientists from Brazil and Spain to deploy a remote monitoring system, and our experts have just stepped off the plane from a trip testing the deployment of the network. Providence will involve a continuous monitoring system that acts as the eyes and ears of the Amazonian forest, using a wireless network of sensors to monitor the activity of species including jaguars, monkeys, bats, birds, reptiles and even dolphins. Providence is multi-institutional – we’re partnering with the Mamiraua Institute (Brazil), the Federal University of Amazonas (Brazil), and the Sense of Silence Foundation of the Technical University of Catalonia (Spain).
In addition to Providence, our robotics and autonomous systems experts tagged along, testing the capability of a new, six-legged robot to navigate autonomously in the rainforest.
Providence and its eyes – peering at biodiversity
Phase one of Providence commenced in December 2016 – the team was granted nearly $2 million in research funding from the Gordon and Betty Moore Foundation, the iconic American foundation established by Intel co-founder Gordon E. Moore and his wife Betty. The plan was to use our technological innovation to monitor biodiversity in the Amazon on a scale that hasn’t been seen before, using multiple technologies – acoustic, visual and thermal imaging – to do it.
So what’s happened in the year and a half since then? Our most recent trip was to deploy our sensors and flick the on switch. Dr Paulo Borges, the project leader for our component of the partnership, said there are ten locations, each several kilometres apart. We’re in the process of getting these nodes up and running, and we’re already receiving data from the network.
With the help of local experts from the heart of the Amazon forest, the team climbed trees, installed solar panels and batteries, and then activated cameras and other sensors. It was a true field trip, using boats and hikes to reach really remote areas. The cameras detect movement, take a photograph and let us know if it’s significant, like an animal passing by.
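To picture how a motion-triggered camera of this kind decides what’s significant, here is a minimal, hypothetical sketch based on simple frame differencing. The frame format, threshold value and function names are illustrative assumptions, not the actual Providence camera firmware:

```python
# Toy motion-trigger logic: compare successive grayscale frames and flag a
# capture only when the average change exceeds a threshold.

def frame_diff(prev, curr):
    """Mean absolute pixel difference between two frames (flat pixel lists)."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def motion_detected(prev, curr, threshold=20.0):
    """Trigger a capture when the average change passes the threshold."""
    return frame_diff(prev, curr) > threshold

# A still scene barely changes; an animal passing changes a large region.
still = [100] * 64
noisy = [101] * 64                 # sensor noise only: no trigger
animal = [100] * 32 + [200] * 32   # half the frame changes: capture and report

print(motion_detected(still, noisy))
print(motion_detected(still, animal))
```

In practice a deployed node would also debounce repeated triggers and filter out slow changes such as shifting daylight, but the core idea is the same threshold test.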
“No matter how advanced technology is, we can never underestimate the challenges around the heat, humidity, density and vastness of a rainforest, particularly the Amazon. The area is particularly challenging because it is known as a varzea region, which is exposed to 10 metres of flooding every year. It is amazing to see how the fauna and local communities adapt to these circumstances,” Paulo said.
The red robot roaming the rainforest
A network of sensors in the rainforest is, by nature, fixed to a single location. So during this trip to the Amazon, we wanted to see whether autonomous legged robots could help solve that challenge and give us a wider view of the area. To that end we’ve developed a range of legged, wheeled and flying robots (read about them here). One of our mechatronics engineers, Ryan Steindl, travelled with the Providence crew to the Amazon and tested out his new robot in the field.
“We wanted to test our technology in a real, remote environment. We put it through its paces, making it walk over new terrain and seeing if it could move autonomously in that new terrain towards a targeted area,” Ryan said.
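At its simplest, “moving autonomously towards a targeted area” reduces to repeatedly stepping along the bearing to the goal. The sketch below is a toy model under assumed 2D positions and a fixed step length; the real controller also has to handle terrain, obstacles and foothold selection:

```python
import math

def step_toward(pos, target, step=0.5):
    """Move one step of at most `step` metres along the bearing to the target."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= step:
        return target  # close enough: snap to the goal
    # Otherwise move a full step along the unit vector towards the target.
    return (pos[0] + step * dx / dist, pos[1] + step * dy / dist)

pos, target = (0.0, 0.0), (3.0, 4.0)
while pos != target:
    pos = step_toward(pos, target)
print(pos)  # arrives at the target
```

A greedy step like this is enough on open ground; the hard part in a rainforest, as Ryan notes below, is judging what counts as solid ground in the first place.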
Ryan and his bots discovered some interesting nuances of navigation in the Amazon. “The robot was able to be deployed by a single person in the field – a rarity in field robotics. We want stability in an environment where you can’t necessarily see the true foot holds the robot will use,” Ryan said. “In the forest, how does a robot distinguish the difference between a small sapling you can push over and a vine that will entangle you?”
Our next steps are to improve the robot’s ability to be sure-footed in situations where we don’t necessarily know the environment it is stepping into. This will ultimately increase its speed and ability to climb over extreme terrain.
Ryan’s red robot and Paulo’s complex sensor network are two excellent examples of how we use science, technology and engineering to solve pressing problems. In this case, the serious challenge of biodiversity in the Amazon can be partly met through collaborative efforts, and thorough, real-world testing.
1st June 2018 at 3:16 pm
Impressive robot. Is the occasional “mantis quiver” part of its sensory/orientation system or just an artifact of its terrain adaptation?
4th June 2018 at 1:23 pm
Rod, that’s right – the ‘quiver’ is related to the robot stabilising itself, based on feedback from sensors and joints in the machine. Thanks for reading. (Jesse & Ketan, CSIRO Communications)