
Building Ford’s Next-Generation Autonomous Development Vehicle


Jim Oaks

Just some guy with a website
Administrator
Founder / Site Owner
Supporting Vendor
Article Contributor
TRS Banner 2010-2011
TRS Banner 2012-2015
TRS 20th Anniversary
VAGABOND
TRS Event Participant
GMRS Radio License
Joined
Aug 2, 2000
Messages
13,471
Reaction score
8,649
Points
113
Location
Nocona, Texas
Vehicle Year
1996 / 2021
Make / Model
Ford Ranger
Engine Type
4.0 V6
Engine Size
4.0 / 2.3 Ecoboost
Transmission
Automatic
2WD / 4WD
4WD
Total Lift
6-inches
Tire Size
33x12.50x15

By Chris Brewer, Chief Program Engineer, Ford Autonomous Vehicle Development

I’m proud to introduce the next-generation Ford Fusion Hybrid autonomous development vehicle.

It’s been three years since we hit the streets with our first Fusion Hybrid autonomous research vehicle, and this latest version takes everything we learned and builds on it.

This new car uses the current Ford autonomous vehicle platform, but ups the processing power with new computer hardware. Electrical controls are closer to production-ready, and adjustments to the sensor technology, including placement, allow the car to better see what’s around it. New LiDAR sensors have a sleeker design and more targeted field of vision, which enables the car to now use just two sensors rather than four, while still getting just as much data.

As we’ve discussed before, there are two main elements to creating an autonomous vehicle — the autonomous vehicle platform, which is an upgraded version of the car itself, and the virtual driver system. This new vehicle evolves both elements, particularly with regard to the development and testing of the virtual driver system, which represents a big leap in sensing and computing power.

What do we mean by virtual driver system? Well, to make fully autonomous SAE-defined Level 4-capable vehicles, which do not need a driver to take control, the car must be able to do everything a human driver can do behind the wheel. Our virtual driver system is designed to do just that. It is made up of the following (a rough software sketch follows this list):

Sensors — LiDAR, cameras and radar
Algorithms for localization and path planning
Computer vision and machine learning
Highly detailed 3D maps
Computational and electronics horsepower to make it all work
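
To make that architecture a little more concrete, here is a minimal software sketch of how those pieces could fit together. It is purely illustrative; Ford has not published its virtual driver code, and every class and method name below is an assumption, not the actual system.

```python
# Purely illustrative sketch of a virtual driver system's top-level structure.
# None of these names come from Ford's software; they only mirror the
# components listed above: sensors, localization and path planning,
# computer vision / machine learning, 3D maps, and the compute tying it together.

from dataclasses import dataclass


@dataclass
class SensorFrame:
    """One synchronized snapshot of raw sensor data."""
    lidar_points: list    # 3D points from the LiDAR units
    camera_images: list   # frames from the roof and windshield cameras
    radar_tracks: list    # range and relative-velocity returns from radar


class VirtualDriver:
    def __init__(self, hd_map, localizer, perception, planner, controller):
        self.hd_map = hd_map          # highly detailed 3D map (prior knowledge)
        self.localizer = localizer    # works out where the car is on that map
        self.perception = perception  # computer vision / machine learning models
        self.planner = planner        # chooses a path and the next maneuver
        self.controller = controller  # sends commands to steering, brake, throttle

    def step(self, frame: SensorFrame):
        pose = self.localizer.locate(frame, self.hd_map)    # mediated perception
        objects = self.perception.detect(frame)             # direct perception
        trajectory = self.planner.plan(pose, objects, self.hd_map)
        self.controller.execute(trajectory)                 # control the car
```

The division of labor is the point: the localizer anchors the car to the prior map, perception handles everything that moves, and the planner turns both into commands for the car's controls.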

Building a car that will not be controlled by a human driver is completely different from designing a conventional vehicle, and this raises a whole new set of questions for our autonomous vehicle engineering team: How do you replicate everything a human driver does behind the wheel in a vehicle that drives itself? A simple errand to the store requires a human driver to make decisions continuously en route. Do they have the right of way? What happens if an accident or construction blocks their route?

Just as we have confidence in ourselves and other drivers, we need to develop a robust virtual driver system with the same level of dependability to make decisions, and then carry them out appropriately on the go. We’re doing that at Ford by taking a unique approach to help make our autonomous cars see, sense, think and perform like a human — in fact, better, in some cases.

How the car “sees”

I’m going to get a little technical here, so please stay with me. Based on current and anticipated technology, our engineers are working to build two methods of perception into the virtual driver system of an autonomous vehicle: mediated perception and direct perception.

Mediated perception requires the creation of high-resolution 3D maps of the environment where the autonomous car will be driving. These maps encompass everything the virtual driver system knows about the road before the car even starts driving — locations of stop signs, crosswalks, traffic signals and other static things. When out on the road, the virtual driver uses its LiDAR, radar and camera sensors to continuously scan the area around the car and compare — or mediate — what it sees against the 3D map. This allows it to precisely locate the vehicle’s position on the road, and to identify and understand what’s around it. Mediated perception also includes the system that knows the rules of the road, so it can prepare for and abide by those rules.
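
As a rough illustration of that "compare against the map" idea, the toy snippet below scores candidate vehicle poses by how closely the live LiDAR points line up with landmarks stored in a prior map, then picks the best fit. Real localizers use dense scan matching and probabilistic filtering; the functions, inputs and scoring here are simplified assumptions.

```python
import math

# Toy version of mediated perception: pick the candidate pose whose view of
# the live LiDAR points lines up best with known static landmarks from the
# prior 3D map. Real localizers use dense scan matching and probabilistic
# filters; this nearest-landmark scoring only conveys the idea.

def score_pose(pose, lidar_points, map_landmarks):
    """Lower score means the live points sit closer to known map landmarks."""
    x, y, heading = pose
    total = 0.0
    for px, py in lidar_points:
        # Transform the point from the car's frame into the map frame.
        mx = x + px * math.cos(heading) - py * math.sin(heading)
        my = y + px * math.sin(heading) + py * math.cos(heading)
        # Distance to the nearest known static feature (sign, curb, pole, ...).
        total += min(math.hypot(mx - lx, my - ly) for lx, ly in map_landmarks)
    return total


def localize(candidate_poses, lidar_points, map_landmarks):
    """Return the candidate pose that best explains the current scan."""
    return min(candidate_poses,
               key=lambda pose: score_pose(pose, lidar_points, map_landmarks))
```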

Direct perception complements mediated perception by using the sensors to see the vehicle’s positioning on the road, as well as dynamic entities — like pedestrians, cyclists and other cars. The sensors can even help interpret hand signals, such as those of a police officer directing traffic in the road. Naturally, the capacity for direct perception requires even more sophisticated software and computing power to identify and classify various entities, especially pedestrians who are on the move.
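
In software terms, direct perception boils down to detecting and classifying whatever the prior map cannot know about. The hypothetical fragment below shows the shape of that step; the detector object, its run method and the confidence threshold are stand-ins for whatever trained models a production system would actually use.

```python
# Hypothetical shape of the direct-perception step: a trained detector proposes
# objects in the current sensor frame, and each one is labeled so the planner
# can treat pedestrians and cyclists with extra caution. The `detector` object
# and its attributes are stand-ins, not a real API.

DYNAMIC_CLASSES = {"pedestrian", "cyclist", "vehicle"}

def detect_dynamic_objects(frame, detector, min_confidence=0.6):
    tracked = []
    for det in detector.run(frame):    # assumed interface of the stand-in model
        if det.confidence < min_confidence or det.label not in DYNAMIC_CLASSES:
            continue
        tracked.append({
            "label": det.label,
            "position": det.position,  # where the object is right now
            "velocity": det.velocity,  # how it is moving relative to the car
            # People on foot get a wider safety margin downstream.
            "buffer_m": 2.0 if det.label == "pedestrian" else 1.0,
        })
    return tracked
```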

This hybrid approach, incorporating both mediated and direct perception, will enable our virtual driver system to perform tasks equal to what a human driver could, and potentially, even better.

Now, let’s explore what goes into transforming a human-driven Ford Fusion Hybrid into a fully autonomous car. To keep it simple, we’ll break the virtual driver’s responsibilities into three tasks — sensing the surrounding environment, using that perception to make decisions on the road, and controlling the car.

Sensing around the car

A technician installs one of two new LiDAR sensors, each generating millions of beams to provide a 360-degree view around the car.

From the outside, our autonomous research vehicle’s sensors are the most noticeable differentiator from a conventional Fusion Hybrid. Think of them like a human’s eyes and ears.

Two hockey-puck-sized LiDAR sensors, each generating millions of beams, jut from the car’s front pillars, providing a 360-degree view. These new sensors possess a sensing range roughly the length of two football fields in every direction. High-definition LiDAR is especially well-suited for distinguishing where an object is, how big it is, and what it looks like.

Three cameras mounted on two racks are installed atop the roof. A forward-facing camera is mounted under the windshield. These cameras work to identify objects and read traffic lights on the road.

Short- and long-range radar sensors — adept at seeing through rain, fog and heavy snow — add another level of vision, helping to determine how an object is moving relative to the car.

Data from all three sensor types feed into the autonomous vehicle’s brain, where the information is compared to the 3D map of the environment and other computer vision processing takes place.
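
The reason all three sensor types are needed is that their strengths are complementary: LiDAR is best at where an object is and how big it is, cameras at what it is, and radar at how it is moving. A hedged sketch of that fusion step, with invented field names, might look like this:

```python
# Sketch of merging one matched set of readings into a single object estimate.
# All field names are invented; a real fusion stack also associates detections
# across sensors and filters them over time.

def fuse_object(lidar_cluster, camera_detection, radar_return):
    return {
        "position": lidar_cluster["centroid"],            # LiDAR: precise location
        "size": lidar_cluster["bounding_box"],            # LiDAR: extent and shape
        "label": camera_detection["label"],               # camera: what the object is
        "relative_velocity": radar_return["range_rate"],  # radar: how it is moving
    }
```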

Thinking and making decisions

The computer, located in the trunk, acts as the brain of the autonomous development vehicle.

Ford’s autonomous vehicle brain is located in the trunk. There, the equivalent of several high-end computers generates 1 terabyte of data an hour — more than the average person would use in mobile-phone data in 45 years.
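
That comparison is easier to appreciate with a quick back-of-the-envelope calculation. Assuming 1 terabyte is 1,000 gigabytes, spreading a single hour of vehicle data across 45 years works out to a little under 2 GB of mobile data per month:

```python
# Back-of-the-envelope check of the "45 years of mobile data" comparison,
# assuming 1 TB = 1,000 GB.

TB_PER_HOUR = 1
GB_PER_TB = 1000
YEARS = 45

gb_per_year = TB_PER_HOUR * GB_PER_TB / YEARS
gb_per_month = gb_per_year / 12
print(f"{gb_per_year:.1f} GB per year, about {gb_per_month:.2f} GB per month")
# -> 22.2 GB per year, about 1.85 GB per month of implied mobile-data use.
```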

But what really brings the computing platform to life is Ford’s virtual driver software, developed in-house.

There are a lot of considerations an autonomous car has to process on the fly: What’s around it? What are other drivers doing? Where is it going? What’s the best route? If merging into another lane, does it speed up, or slow down? What does that decision mean for other vehicles on the road?

The sophisticated algorithms our engineers write process millions of pieces of data every second, helping the autonomous vehicle to react just as it is programmed to do.
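
To make one of those on-the-fly decisions concrete, here is a deliberately simplified, hypothetical version of the lane-merge question from above: given the gaps around the target slot, does the car speed up, slow down or hold its speed? A production planner optimizes over full trajectories with many more constraints; this only illustrates the kind of rule the software must evaluate continuously.

```python
# Deliberately simplified, hypothetical merge decision: compare the gaps ahead
# of and behind the target slot and adjust speed accordingly. Real planners
# optimize whole trajectories under many more constraints.

def merge_speed_decision(gap_ahead_m, gap_behind_m, closing_speed_mps,
                         min_gap_m=15.0):
    """Return 'speed_up', 'slow_down' or 'hold' for a lane merge."""
    if gap_ahead_m < min_gap_m <= gap_behind_m:
        return "slow_down"   # slot in behind the car ahead
    if gap_behind_m < min_gap_m <= gap_ahead_m:
        return "speed_up"    # pull ahead of the trailing car
    if closing_speed_mps > 0 and gap_ahead_m < 2 * min_gap_m:
        return "slow_down"   # the gap ahead is shrinking; back off
    return "hold"
```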

Controlling the car

Just as our brain tells the muscles in our hands and feet what to do, decisions are relayed to the autonomous vehicle’s controls via a network of electrical signals. This means tweaking the Fusion Hybrid’s software and, in some cases, the hardware, so that electrical commands can be sent to the steering, braking, throttle and shifting systems. Ensuring all of those mechanical systems perform as instructed requires a sophisticated network, similar to a human body’s nervous system.
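
At the software boundary, those electrical commands might look something like the sketch below. The message fields, limits and conversion are invented for illustration; a real vehicle uses proprietary CAN or automotive Ethernet interfaces with redundant safety checks.

```python
# Illustrative drive-by-wire boundary: one point of the planned trajectory is
# turned into a bounded actuation command. Field names and limits are invented;
# a production interface is a vendor-specific, safety-interlocked protocol.

from dataclasses import dataclass

@dataclass
class ActuationCommand:
    steering_angle_deg: float   # requested road-wheel angle
    throttle_pct: float         # 0 to 100
    brake_pct: float            # 0 to 100
    gear: str                   # "P", "R", "N" or "D"

def clamp(value, low, high):
    return max(low, min(high, value))

def to_actuation(trajectory_point):
    """Convert one planned trajectory point into a bounded command."""
    return ActuationCommand(
        steering_angle_deg=clamp(trajectory_point["steer_deg"], -35.0, 35.0),
        throttle_pct=clamp(trajectory_point["throttle"], 0.0, 100.0),
        brake_pct=clamp(trajectory_point["brake"], 0.0, 100.0),
        gear=trajectory_point.get("gear", "D"),
    )
```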

Of course, additional functions require additional power, and a lot of it. A standard gas-powered car doesn’t have enough electrical power for an autonomous vehicle, so we’ve had to tap into the Fusion Hybrid’s high-voltage battery pack, adding a second, independent power converter to create two sources of power and maintain robustness.

This new development vehicle brings Ford a step closer to delivering on its commitment to offer a fully autonomous vehicle in 2021 for ride-sharing and ride-hailing services. For now, the car still comes with a steering wheel and pedals, equipment our ride-sharing vehicles ultimately won’t include.

Looking ahead, we have a lot more to do. An expanded fleet is accelerating our real-world testing already taking place on the roads in Michigan, California and Arizona. We plan to grow the fleet even more, tripling its size to about 90 cars in the new year.

And you’ll start hearing more about how we’re thinking through the user experience of an autonomous ride-sharing or ride-hailing fleet vehicle. For example, we’re working on what to do if a passenger accidentally leaves items in the vehicle, or fails to close a door after exiting.

Our engineers are unrelenting in their mission to develop a robust, capable and trustworthy virtual driver system. And our next-generation autonomous vehicle is a clear step forward — one that takes us closer to the self-driving car we envision our customers will one day ride around town.

The future is coming. And we can’t wait.

 
