The Year of LiDAR

2022 is clearly the year of LiDAR. 

At UAS shows across the USA, Mexico, Canada, and the EU, LiDAR was the hot topic of 2022, and 2023 is shaping up to be more of the same, with significant growth.

LiDAR (“LIght Detection And Ranging”) is a sensor that uses a laser, a position-controlled mirror, an IMU (Inertial Measurement Unit), and internal processing to record geolocation data.

A LiDAR sensor emits a pulse of light towards the target (the ground). The light reflects off the surface (a point) and returns to the sensor. The receiver detects the returning signal and calculates the distance the light has traveled. Combining the position of the sensor (from GNSS and the IMU), the direction in which the mirror sent the light, and the calculated distance, the 3D position of the return can be determined. With millions of reflections striking a terrestrial surface and returning to the LiDAR sensor, these contact “points” are used to generate a 3D model or ortho, re-creating the target area in a digital environment.
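
The math behind each return is simple trigonometry plus bookkeeping. Below is a minimal sketch in Python of how one pulse becomes a 3D point; the function name and values are illustrative assumptions, not any vendor's actual firmware or API.

```python
# A minimal sketch of how a single LiDAR return is georeferenced.
# Names and values are illustrative, not any vendor's actual API.
import numpy as np

def georeference_return(sensor_pos, attitude, beam_dir_body, range_m):
    """Convert one range measurement into a 3D world point.

    sensor_pos    -- sensor position in the world frame (from GNSS), shape (3,)
    attitude      -- 3x3 rotation matrix, body frame to world frame (from IMU)
    beam_dir_body -- unit vector of the laser beam in the body frame
                     (set by the mirror angle at the instant of the pulse)
    range_m       -- distance computed from the pulse's time of flight
    """
    return sensor_pos + attitude @ (beam_dir_body * range_m)

# Example: a time of flight of 333.6 ns -> a range of ~50 m (round trip at c)
c = 299_792_458.0                      # speed of light, m/s
tof = 333.6e-9                         # seconds
rng = c * tof / 2.0                    # one-way distance

point = georeference_return(
    sensor_pos=np.array([0.0, 0.0, 120.0]),    # drone 120 m above origin
    attitude=np.eye(3),                        # level flight, no rotation
    beam_dir_body=np.array([0.0, 0.0, -1.0]),  # beam pointing straight down
    range_m=rng,
)
print(point)   # ~[0, 0, 70]: a ground return 50 m below the sensor
```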

Because LiDAR sensors generate their own signal pulse and do not depend on illumination from other sources (for example, the sun), many LiDAR operators/pilots capture data at night. As long as nothing interferes between sensor and surface, it is possible to collect data below cloud cover or in the dark. This gives LiDAR extremely flexible access to areas requiring scans: missions can fly at night, or when cloud cover puts a site in lighting conditions where photogrammetry may not be possible.

LiDAR sensors were previously relegated to fixed-wing or rotary aircraft due to weight and cost; they are now accessible to any mid- or heavy-lift UAS.

The image above shows the author’s first experience with LiDAR: a Velodyne VLP16 with a Geodetics IMU, mounted to a Yuneec H920 hexacopter.

With ever-increasing flight efficiency coupled with the reduced weight and cost of LiDAR sensors, several aircraft and LiDAR systems are available at price points to suit virtually any budget. While LiDAR may not yet be for casual pilots, commercial pilots report near-immediate full ROI with LiDAR due to the current scarcity of complete systems.

Sensors may be purchased as a complete solution with aircraft, support software, and payload, or owners of medium-lift systems may purchase LiDAR sensors separately to mount on whatever aircraft they’re familiar and comfortable with. For example, LiDAR payloads are available for the DJI Matrice 300 platform, as well as Inspired Flight, Freefly, Yuneec, Maptek, Microdrones, and other systems.

LiDAR packages may be stand-alone, combined with separate RGB cameras for photogrammetry, or assembled with both in one housing. For example, the highly popular GeoCue 515 package not only offers a Hesai XT32 LiDAR sensor, it also includes two 20MP RGB cameras for colorizing the pointcloud or for photogrammetry deliverables. The system is also designed to scale RGB data precisely onto the 3D pointcloud, providing not only an accurate and precise model, but colorized, photo-realistic data for engineers, surveyors, construction teams, graphic designers, game designers, etc.

Pilots, engineers, program managers, and surveyors will want to consider several factors when choosing a LiDAR payload for purchase or rent:

  • Cost
  • Penetration
  • Resolution
  • Software cost/flexibility
  • Difficulty of operation

Different sensors will yield different results. Below are examples from the DJI L1, the Velodyne VLP16 (Microdrones HR), the Hesai Pandar XT32, and the Riegl VUX1 sensors. Profiles/cross sections captured from LP360 illustrate the surface data from the various sensors, and are a reliable way to demonstrate vegetation penetration.

DJI L1

Pictured above, the DJI L1 is incapable of any effective penetration through vegetation or other porous areas. Additionally, strip alignment may be challenging in some scenarios. This data was captured, initially processed in DJI Terra, and finished in GeoCue LP360.

Microdrones MD1000HR (VLP16)

The profile seen here, demonstrates the penetration capabilities of the Microdrones HR VLP16 payload. Note the greater resolution of data below trees, both broadleaf and palm.

GeoCue 515

In this image, there are no gaps beneath the trees. In the center, a uniform depression is visible: the Hesai Pandar XT32 was able to “see” below the surface of shallow water, in this case approximately 12” deep, with the creek bottom solid (visible). While the below-water data is not viable for measurement, it does provide additional data for engineering considerations.

RIEGL VUX1

These two illustrations are sourced from the Riegl VUX1 sensor. This sensor provides the highest resolution of the four compared here, with a much higher price tag to match the image quality. Note in the zoomed-in profile that train rails/tracks are not only visible, but accurately measurable. There are no holes in the surface beneath any of the trees, and the tree detail is sufficient to classify tree types.
 

“Penetrating vegetation is a key function of LiDAR sensors; this is why tree profiles/slices have been used to illustrate these challenging scenarios.”

WHAT ABOUT SOLID STATE LiDAR SYSTEMS?

It is worth noting that solid state LiDAR systems are on the rise, and very much in development for longer range with high density. The technology has proved promising due to lighter weight, lower power consumption, and speed, but it hasn’t yet matured to the point where solid state LiDAR is broadly applicable for UAS work. Development is heavily focused on autonomous vehicles at present, yet we fully anticipate that solid state LiDAR will soon be available for aerial applications.

HOW IS LiDAR DIFFERENT FROM PHOTOGRAMMETRY?

Photogrammetry uses multiple images with embedded geodata, matching pixels and metadata to create an orthomosaic. Pointclouds can be derived from images with slightly less accuracy, but a significant time commitment. A 50 acre field processed as a pointcloud derived from photos may take up to 12 hours on an average computer, while the same computer will process the LiDAR-sourced pointcloud in under 30 minutes. A LiDAR mission is also significantly faster to fly than a photogrammetry mission, as the need for deep overlap is lessened in a LiDAR workflow.
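
A back-of-the-envelope comparison shows why the flying goes faster. The sketch below assumes illustrative values (a 73° camera FOV with 75% sidelap versus a 90° usable scan angle with 20% swath overlap); actual figures vary by sensor and mission.

```python
# Rough illustration of why LiDAR missions fly faster: less overlap means
# wider line spacing and fewer flight lines. All numbers are assumptions
# for illustration, not specs of any particular sensor.
import math

altitude = 60.0            # meters AGL
site_width = 400.0         # meters across the flight lines

# Photogrammetry: footprint from an assumed 73-degree horizontal FOV,
# flown with 75% sidelap.
photo_footprint = 2 * altitude * math.tan(math.radians(73 / 2))
photo_spacing = photo_footprint * (1 - 0.75)

# LiDAR: assumed 90-degree usable scan angle, ~20% swath overlap.
lidar_swath = 2 * altitude * math.tan(math.radians(90 / 2))
lidar_spacing = lidar_swath * (1 - 0.20)

for name, spacing in [("photogrammetry", photo_spacing), ("lidar", lidar_spacing)]:
    lines = math.ceil(site_width / spacing)
    print(f"{name}: {spacing:.1f} m line spacing -> {lines} flight lines")
# Fewer flight lines means less time in the air over the same site.
```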

Additionally, LiDAR may be flown at night (unless colorization is needed) while photogrammetry requires daylight hours. 

On the other hand, photogrammetry missions may be flown while there is water on the ground after a flood or heavy precipitation. LiDAR works best in dry, non-reflective environments. Mirrored windows, water reflecting on leaves, ponds, creeks, etc., will display as blacked-out areas in a LiDAR scan.

In this scan of the Colorado River, areas containing water display as black.

Not all software applications are compatible with all the different LiDAR sensors. The way trajectories are read/displayed, how data is managed/handled, and even basic features differ greatly between the software tools available today. For example, until recently, data from DJI’s L1 LiDAR system could only be initially processed in DJI Terra software, which is quite limited, which many feel is “kludgy and slow,” and which is not known for stability.

Recently, GeoCue added the DJI L1 to its compatibility list, enabling DJI users to process L1 data in LP360 with great stability, flexibility, and speed.

SOFTWARE

When choosing a LiDAR system, there are many considerations, the greatest of which is how important high resolution and precision at ground level will be to projects/workflows. Budget frequently makes this determination. However, the bottom line and long-term needs are often at odds with each other; it’s wise to spend “up” to a higher grade LiDAR sensor when customer satisfaction is at the top of the list. Research often requires higher grade sensors as well.

Also consider the aircraft carrying the payload, the software required to process the data, and flight times. Two hours flying a narrow-beam sensor versus 30 minutes with a wider swath may make all the difference, particularly when the company has a deep backlog and is focused on efficiency.

Whether an organization is ready for LiDAR now or down the road, there has never been a better time to learn more about LiDAR, pointclouds, and how their data processing differs from photogrammetry workflows.

Special thanks to Brady Reisch of KukerRanken for the profile slices of data.


Does the Drone Industry Really Need 8K?

Pro Read: As a leak indicates that Autel Robotics may be the first to offer a 6/8K camera on a drone, UAS expert and industry leader Douglas Spotted Eagle dives into what the advantages of 8K may be – and whether the drone industry is ready to take advantage of them.

Guest post by Douglas Spotted Eagle, Chief Strategy Officer – KukerRanken

In 2004, Sony released the world’s first low-cost HD camera, known as the HVR-Z1U. The camera featured a standard 1/3” imager, squeezing 1440×1080 anamorphic (non-square) pixels onto the sensor. It was also the world’s first pro-sumer camera to use the MPEG2 compression scheme, with 4:2:0 color sampling and a GOP method of frame progression. This new technology set the stage for much higher resolutions and, eventually, greater frame rates.

Its “father” was the CineAlta HDWF900, which offered three 2/3” CCDs and was the industry standard for filmmaking for several years, capturing big hits such as the “Star Wars” prequel trilogy, “Once Upon a Time in Mexico”, “Real Steel”, “Tomorrowland”, “Avatar”, “Spy Kids” (1 & 2), and many others. The newer HDV format spawned from similar technology found in the HDWF900, and set the stage for extremely high end camera tech to trickle down into the pro-sumer space.

Over time, camera engineers identified methods of co-siting more pixels on small imagers, binning pixels, or using other techniques to increase capture resolution on small surfaces. Compression engineers developed new compression schemes, bringing forward H.263, AVC (H.264), and now HEVC/High Efficiency Video Codec (H.265), with still others soon to be revealed.

Which brings us to the present.

We have to roughly quadruple megapixels to double resolution, so the jump from SD to HD makes sense, while the jump from HD to UHD/4K makes even more sense. Following that theme, jumping to 6K makes sense, while jumping to 8K is perfect in theory, and nears the maximum of the human eye’s ability to resolve information.
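
The arithmetic is easy to verify; a quick sketch:

```python
# Quadrupling pixels to double linear resolution, by the numbers.
formats = {
    "SD (720x480)":       (720, 480),
    "HD (1920x1080)":     (1920, 1080),
    "UHD/4K (3840x2160)": (3840, 2160),
    "8K (7680x4320)":     (7680, 4320),
}
for name, (w, h) in formats.items():
    print(f"{name}: {w * h / 1e6:.1f} MP")
# HD -> UHD doubles width and height, so pixel count goes up 4x (2.1 -> 8.3 MP);
# UHD -> 8K does it again (8.3 -> 33.2 MP), landing near the ~40 MP
# often cited for the human eye.
```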

At NAB 2018, Sony and Blackmagic Design both revealed 8K cameras, and others have since followed suit.

During CommUAV and InterDrone, several folks asked for my opinion on 6 and 8K resolutions. Nearly all were shocked as I expressed enthusiasm for the format.

– “It’s impossible to edit.”

– “The files are huge.”

– “No computer can manage it.”

– “There is nowhere to show 8K footage.”

– “Human eyes can’t resolve that resolution unless sitting very far away from the screen.”

– “Data cards aren’t fast enough.”

And….so on.

These are all the same comments heard as we predicted the tempo of the camera industry transitioning from SD to HD, and from HD to 4K.  In other words, we’ve been here before.

Video cameras are acquisition devices. For the same reasons major motion pictures are acquired at the highest possible resolutions, and for the same reasons photographers get excited as on-camera resolutions increase, so should UAS photographers. Greater resolution doesn’t always mean higher grade images, nor does a larger sensor automatically increase image quality. On the whole, though, higher resolution systems usually do translate into higher quality images.

Sensor sizes are somewhat important to this discussion, yet not entirely critical. The camera industry has been packing more and more pixels into the same physical space for nearly two decades, without the feared increase in noise. Additionally, better noise-sampling/reduction algorithms, particularly from OEMs like Sony and Ambarella, have allowed far greater noise reduction than in the past. Cameras such as the Sony A7RIV and its predecessors offer nearly noise-free images at ISO 32,000!

Sensor sizes vary, of course, but most UAS utilize either the 1/2.3” or the 1” sensor (the light blue and turquoise sizes, respectively, as seen below).

“Imagine an UAS equipped with an 8K camera inspecting a communications tower. Resolution is high, so small specks of rust, pitting, spalling, or other damage which might be missed at lower resolutions or by the human eye become apparent with greater resolution.”

WHY DOES HIGHER RESOLUTION TRANSLATE TO BETTER FINISHED PRODUCT?

Generally, we’re downsampling video or photos to smaller delivery vehicles, and for one good reason. In broadcast, the uncompressed 4:2:2 color scheme was the grail, yet most UAS cameras capture a 4:2:0 color/chroma sample. However, a 4K capture downsampled to 1080 at delivery offers videographers that same “grail” color schema of 4:2:2!

As we move into 6 or 8K, similar results occur. 8K downconverted to HD offers a 4:4:4 color sample.
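
The chroma-plane sizes make this concrete. A short sketch, assuming standard 4:2:0 storage (chroma at half the luma resolution in each axis):

```python
# Why downsampling recovers chroma: in 4:2:0, the chroma planes are stored
# at half the luma resolution in each axis. A sketch of the plane sizes:
def chroma_plane(width, height):
    """4:2:0 chroma plane dimensions for a given luma resolution."""
    return width // 2, height // 2

for name, (w, h) in [("4K", (3840, 2160)), ("8K", (7680, 4320))]:
    cw, ch = chroma_plane(w, h)
    print(f"{name} 4:2:0 capture: chroma stored at {cw}x{ch}")

# 4K capture: chroma at 1920x1080 -- one chroma sample per pixel of a
# 1080p delivery, so the downconverted HD behaves like a richer color sample.
# 8K capture: chroma at 3840x2160 -- four chroma samples per HD pixel,
# which is why 8K-to-HD downconverts are described as 4:4:4.
```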

CROPPING

We gain the ability to crop in post editing/delivery to recompose images without fear of losing resolution. This means that although the aircraft may shoot a wide shot, the image may be recomposed to a tighter image in post, so long as the delivery is smaller than the source/acquisition capture. For example, shooting 4K for 1080 delivery means that up to 75% of the image may be cropped without resolution loss.
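
The crop headroom follows directly from the pixel counts; a quick sketch:

```python
# Crop headroom when acquisition outresolves delivery: the fraction of the
# frame you can discard while still filling every delivery pixel 1:1.
def crop_headroom(src, dst):
    src_px = src[0] * src[1]
    dst_px = dst[0] * dst[1]
    return 1 - dst_px / src_px

print(f"4K -> 1080p: {crop_headroom((3840, 2160), (1920, 1080)):.0%}")  # 75%
print(f"8K -> 1080p: {crop_headroom((7680, 4320), (1920, 1080)):.0%}")  # ~94%
print(f"8K -> 4K:    {crop_headroom((7680, 4320), (3840, 2160)):.0%}")  # 75%
```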

As the image above demonstrates, it’s quite possible to edit 8K HEVC streams on a newer laptop. Performance is not optimal without a great deal of RAM and a good video card, as HEVC requires a fair amount of horsepower to decode. The greater point is that we can edit images with deep recomposition. Moreover, we have more pixels to work with, providing greater latitude for color correction, color timing, and depth/saturation.

For public safety, this is priceless. An 8K capture provides great ability to zoom/crop deeply into a scene and deliver much greater detail in HD or 4K delivery.

The same can be said for inspections, construction progress reports, etc. Users can capture at a high resolution and deliver in a lower resolution.

Another benefit of 6 and 8K resolutions is the increase in dynamic range. While small sensors only provide a small increase in dynamic range, a small increase is preferable to no increase.

To address other statements about 6K and 8K resolutions: the human eye can see around 40 megapixels, age-dependent, and 8K is approximately 33 megapixels. However, the human eye doesn’t see equal resolution across its field of view. The center of the eye sees approximately 8 megapixels, while the outer edges resolve far less. High resolution does provide greater smoothing across the field, so our eyes see smoother moving pictures.

BEYOND THE HUMAN EYE

Going well beyond the human eye, higher resolutions are applicable to “computer vision,” benefiting mapping, 3D modeling, and other similar applications. Generally speaking, more pixels equals greater smoothness and geometry. As technology moves deeper into Artificial Intelligence, higher resolutions with more efficient codecs become even more important. Imagine an UAS equipped with an 8K camera inspecting a communications tower. Resolution is high, so small specks of rust or other damage which might be missed at lower resolutions or by the human eye become more visible. Now imagine that greater resolution providing input to an AI-aided inspection report that might notify the operator or manager of any problem. Our technology is moving beyond the resolution of the human eye for good reason.

DATA STORAGE

Files from a 6 or 8K camera are relatively small, particularly when compared to uncompressed 8K content (9.62TB per hour). Compression formats, known as “codecs,” have been improving steadily for years. When compressed delivery first debuted in physical form, we saw Hollywood movies delivered on DVD; then we saw HD delivered on Blu-ray. Delivery over disc formats is dead, and we’ve since moved through MPEG2, AVC, AVCHD, H.264, and now H.265/HEVC. In the near future we’ll see yet more compression schemes benefitting our workflows, whether delivered via streaming or thumbdrive. VVC, or “Versatile Video Codec,” will be the next big thing in codecs for 8K, scheduled to launch in early 2022.

Conventional h.264 and H.265/HEVC are currently being used as delivery codecs for compressed 6 and 8K streams. 8K has been successfully broadcast (in testing environments) at rates as low as 35Mbps for VOD, while NHK has set the standard at 100Mbps for conventional delivery. Using these codecs, downconverting streams for OTA/Over The Air viewing on tablets, smartphones, or ground station controllers is already possible, though it’s unlikely we’ll see full 8K streaming from the UAS to the GSC.

U3 data cards are certainly prepared for 6 and 8K resolutions/datastreams; compression is what makes this possible. The KenDao 8K and Insta 8K 360 cameras both record to U3 cards available in the market today.
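
The storage arithmetic bears this out. A quick sketch using the bitrates cited above:

```python
# Compressed 8K is manageable: storage per hour at a given delivery bitrate.
# Bitrates below are the figures cited above (35 Mbps VOD test, 100 Mbps NHK).
def gb_per_hour(mbps):
    return mbps * 3600 / 8 / 1000     # megabits/s -> gigabytes/hour

for label, rate in [("8K VOD test", 35), ("8K NHK standard", 100)]:
    print(f"{label} at {rate} Mbps: {gb_per_hour(rate):.1f} GB/hour")
# 35 Mbps -> ~15.8 GB/hour; 100 Mbps -> 45 GB/hour.
# Either fits comfortably on a U3 card, versus ~9,620 GB/hour uncompressed.
```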

It will be some time before the average consumer sees 8K on screens at home. However, for 8K delivered for advertising, matching the large format footage shot on Weapon, Monstro, Helium, or other cinema cameras may be less time-consuming when the UAS camera also shoots 8K, even from the smaller camera formats carried on an UAS (and the larger cameras may easily be carried on heavy-lift UAS).

Professional UAS pilots will benefit greatly from 5, 6, or 8K cameras, and should not be shy about testing the format. Yes, it’s yet another paradigm shift in an always-fluid era of aerial and visual technology, but there can be no doubt that these higher resolutions provide higher quality in the final product. Be prepared: 2020 is the year of 5, 6, and 8K cameras on the flying tripods we’re using for our professional and personal endeavors, and I, for one, am looking forward to it with great enthusiasm.


Experts Tested 4 Different Drone Mapping Solutions for Crime Scene Investigation. Here’s What Happened.

At Commercial UAV Expo in Las Vegas, more than 300 drone industry professionals watched as experts tested four different drone mapping solutions for crime scene investigation at night.

Guest post by Douglas Spotted Eagle, Chief Strategy Officer at KukerRanken

Commercial UAV Expo brought UAS professionals, developers, manufacturers, first responders, and related industries under one roof for the first time in nearly two years. Due to the pandemic, the show was less attended than in previous years, yet it provided robust live demonstrations, night flights, daytime seminars, panels, and case studies for a relatively large audience. There was a strong buzz amongst the crowd about being at an in-person event and experiencing face-to-face communication for the first time in many months.

In addition to the “Beyond the Cage” Live Drone Demo Day that launched Commercial UAV 2021, produced by Sundance Media Group, Wednesday night gave attendees a glimpse of how Crime Scene Investigator tools function in the dark hours. Sundance Media Group developed this methodology several years ago at the request of a law enforcement agency and has presented it at academies, colleges, universities, and tradeshows since 2017, with a variety of aircraft including the DJI Mavic, Phantom 4, Yuneec H520, Skydio, and Autel EVO series (versions 1 and 2). All successfully output data except Skydio, which struggles with brightly lit subjects in surrounding darkness.

Presented by FoxFury, Sundance Media Group, Autel, and Pix4D, this event also invited SkyeBrowse to participate in the demonstration, showing the effectiveness and speed of their application.

Testing Drone Mapping Solutions for Crime Scene Investigation: Setting the Scene

With a model covered in moulage, a mock slit throat, and a blood trail on the ground, the demonstration began with the multi-vendor team led by Brady Reisch, with Bryan Worthen of Kuker-Ranken, Todd Henderson and Patrick Harris of SMG, and David Martel. The team placed four FoxFury T56 lighting systems at specific, measured points in the scene, supplemented by FoxFury NOW lanterns and Rugo lighting to fill in holes and eliminate shadows.

Douglas Spotted Eagle of SMG and KukerRanken emcee’d the event through the two flights.

Douglas Spotted Eagle addresses the crowd of 300 persons

SkyeBrowse had the first flight, with its one-button capture. Brady Reisch set up the mission, with input from the SkyeBrowse developer on the camera exposure levels for the SkyeBrowse video mission. Once the mission was completed, the captured imagery was uploaded to the SkyeBrowse website, where results were available approximately 30 minutes after the flight.

Brady Reisch of KukerRanken sets up the SkyeBrowse mission with Bobby Ouyang of SkyeBrowse

The Autel EVO II Pro was programmed on-site for an automated SkyeBrowse mission and the demonstration began. The area is highly congested, with palm trees and buildings enclosing the small rotunda in front of the Mirage Hotel Convention Center.

Brady Reisch flew the second EVO II mission manually, in much the same configuration as though the aircraft had flown a double-grid mission, supplemented by a high-altitude orbit coupled with manually captured orbits and select placements. Because of the crowd, time was a consideration; in an actual homicide scene, more low-placed images would have been captured.

Brady Reisch monitors time as Pix4DReact rapid-renders the scene (60 seconds)

The mission photos were uploaded to Pix4DReact on-scene and rendered while the audience observed, requiring approximately 60 seconds to output an ortho-rectified 2D image, complete with evidence markers/tags and a supplemental PDF report. The photos were also loaded into the Pix4D and Leica Infinity software packages, to be rendered for 3D viewing once the show floor opened on Thursday. Pix4DReact is a two-dimensional, rapid-mapping solution, so there is no 3D view.

The four screen captures tell the rest of the story, and readers can determine for themselves what each software package is capable of providing. One point of interest: there were many claims of “guaranteed 1cm of precision regardless of flight area,” which have yet to be verified. The Kuker-Ranken team will be re-flying a mission with two separate GPS systems (Leica and Emlid) to verify the claims of precision.

Precision is Repeatable

Precision is repeatable. Accuracy is the degree of closeness to the true value; precision is the degree to which an instrument or process will repeat the same value. In other words, accuracy is the degree of veracity while precision is the degree of reproducibility. With a base station, NTRIP, Spydernet, PPK, or RTK workflow, precision is always the goal, well beyond accuracy. This is a relatively new discussion in the use of unmanned aircraft, and although the topic seems simple enough, its complexity holds challenges not easily dismissed without education and practice. We are fortunate to have a partner in Kuker-Ranken, which has provided precision tools to the survey, forensic, civil engineering, and AEC industries since 1928. The KR team includes PLSs, EITs, and other accredited precision professionals, rarely found in the UAS industry.
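
A toy numeric example makes the distinction concrete; the readings below are hypothetical, not from this demonstration:

```python
# Accuracy vs. precision, numerically: accuracy is closeness to the true
# value, precision is repeatability across repeated measurements.
import statistics

true_x = 100.000                       # known coordinate of a checkpoint (m)
fixes = [100.412, 100.408, 100.415, 100.410, 100.413]   # hypothetical readings

bias = statistics.mean(fixes) - true_x          # accuracy: offset from truth
spread = statistics.stdev(fixes)                # precision: repeatability

print(f"accuracy (mean error): {bias:+.3f} m")  # ~+0.412 m off: not accurate
print(f"precision (std dev):   {spread:.3f} m") # ~0.003 m spread: very precise
# An RTK/PPK workflow drives the spread down; tying to ground truth
# (base station, NTRIP, GCPs) is what removes the bias.
```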

Precision is critical for surveyors, civil engineers, forensic analysts and investigators, construction sites, mapping, agriculture, and other verticals in the UAS industry, and this sort of scene is no exception. Being able to properly place a map or model into a coordinate system is necessary for many professional pilots in the UAV field. While this mission is not precise to a coordinate system, it is precise within itself; in other words, measurements within the image will be accurate even though the overall location is imprecise.

We’ll dive more deeply into precision in a future article. For purposes of this exercise, we’re more interested in the accuracy of content in the scene, and all four outputs were similar in accuracy within the scene itself. In other words, distances, volumes, and angles may be measured point to point. Pix4DReact is not as accurate as the other three tools, as it trades deep accuracy for speed of output.

Output Results of Drone Mapping Solutions

Output #1: SkyeBrowse (processing time, approximately 35 minutes)

Output #2: Pix4Dreact (processing time, approximately 1 minute)


Output #3: Pix4Dmapper (processing time, approximately 2.5 hours)


Output #4: Leica Infinity (processing time, approximately 2 hours, 50 minutes)


Agencies who would like access to this data are invited to contact Brady Reisch, VDC Specialist at Kuker-Ranken.


Selecting the Right Drone for Your Construction Business

Douglas Spotted Eagle and Brady Reisch headed into the field to collect aerial construction data over fourteen weeks with three different drones.  Their goal was to determine which drone was best for the construction job site.

They used three popular aircraft for the comparisons and the results were pretty surprising.   

Drones Compared:

  • Autel EVO (original version)
  • Yuneec H520/E90
  • DJI Phantom 4 Pro

Unmanned Aircraft (UA/Drones) have rapidly become a significant component of the modern construction industry workflow, whether for progress reporting, site planning, BIM, inventory control, safety awareness, structure inspection, topos, or other purposes. Site supervisors, architects, and stakeholders all benefit from the rapid output of accurate 2D/ortho or 3D models that may be used for purposes ranging from simple visualizations, progress reporting, stockpile calculations, DSMs, and contours, to more complex overlays of blueprints in the As-Designed/As-Built or BIM process.

Choosing the right aerial asset/UA may be challenging, particularly as the marketing of many UA focuses on built-in RTK (rarely accurate on its own) or PPK solutions with multi-component workflows, versus others that offer single-step workflows. Decisions on aircraft will be made based on budget, accuracy requirements, speed to result, and overall reporting requirements.

On any site flown for BIM or input to AutoDesk or similar tools, accurate ground control points (GCPs) are required. GCPs may be obtained from the site surveyor, county plat, or other official sources, and this is often the best method, assuming the ground control points can be identified in UA flight-captured images. Site supervisors may also capture their own points using common survey tools; devices such as the DTResearch 301 RTK tablet may be used to augment accuracy, combining GCP location points from the air and on the ground. Failing these methods, site supervisors can capture their own points based on the specific needs of the site, calculated via traditional rover/base RTK systems, or using PPK, RTK, or PPP solutions, again depending on budget and time. If centimeter (vs decimeter) accuracy is required, RTK or PPK is necessary.

Putting accuracy aside, image quality is gaining importance as stakeholders have become accustomed to photo-grade orthos and models. Often these models are used to share progress with inspectors as well, which means having presentation-grade images may be critical. Image quality is a high priority when generating pre-development topos, or simply illustrating a tract of land from all directions. In other words, a high-quality imaging sensor (camera) is a necessity. Some aircraft allow user-chosen cameras, while many UA manufacturers create cameras specific to their aircraft design.

Turning to aircraft, we chose the three popular models listed above for the comparisons.

We flew the site several times in various conditions, and the same RTK capture points are used in all three mapping projects. The DTResearch 301 RTK system is used to capture GCPs on location, with the Hoodman GCP kit as the on-ground targets. The Hoodman SkyRuler system was also captured as a scale-constraint checkpoint.

This commercial site is small in size (1.64 acres), and one we were able to begin capturing prior to forms being laid, all the way to vertical installation.

Accuracy varied greatly with each aircraft system, particularly in elevation calculations. Deviations are measured between projected points and the GCP points obtained through a surveyor’s RTK system.

Overall (and to our surprise), the Autel EVO was most accurate, with a deviation of:

  • x: 5.112 ft
  • y: 47.827 ft
  • z: 16.541 ft

The Yuneec H520/E90 combo was not far behind with a deviation of:

  • x: 10.323 ft
  • y: 44.225 ft
  • z: 92.788 ft

Finally, the DJI Phantom 4 presented deviations of:

  • x: 1.95 ft
  • y: 45.565 ft
  • z: 140.626 ft

All of these deviations are calculated and compensated for in Pix4DMapper, which was used to assemble all of these week-to-week projects.

As 3D modelling was part of the comparison goal, obliques were flown in addition to nadir captures. While manual settings are often essential for high quality maps and models, in the following images the cameras were all set to automatic exposure, shutter, and ISO.

It is important to remember that these deviations are NOT corrected via network or base station. This is autonomous flight, localized in Pix4D.
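
For readers who want to reproduce this kind of check outside Pix4D, here is a hedged sketch of how checkpoint deviations can be tabulated; the point names and coordinates are hypothetical, not this project’s data:

```python
# Compare each point projected by the mapping solve against the surveyed
# GCP coordinate, per axis, plus the combined 3D error.
import math

projected = {"GCP1": (1001.95, 2045.57, 140.63),
             "GCP2": (1502.10, 2398.88, 138.90)}   # from the photo solve (ft)
surveyed  = {"GCP1": (1000.00, 2000.00, 0.00),
             "GCP2": (1500.00, 2355.00, 0.00)}     # from the RTK rover (ft)

for name, proj in projected.items():
    dx, dy, dz = (p - s for p, s in zip(proj, surveyed[name]))
    err_3d = math.sqrt(dx**2 + dy**2 + dz**2)
    print(f"{name}: dx {dx:+.3f}  dy {dy:+.3f}  dz {dz:+.3f}  3D {err_3d:.3f} ft")
```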

MODELS

AUTEL EVO (Original version)
YUNEEC H520/E90
PHANTOM 4 PRO

All aircraft models work well with Pix4DMapper, although at the time of this writing, Pix4D had not created lens profiles for the Autel EVO (they indicated this feature should be available “soon”). We custom-sized the lens profile ourselves, based on information provided by Autel’s product managers. (As of 2.1.22, Pix4D has generated lens profiles for both the Autel EVO and EVO II aircraft.)

ORTHOS

AUTEL EVO
YUNEEC H520/E90
PHANTOM 4 PRO

Although image quality is subjective, our client and our team all agree the Autel EVO provides the best image quality and color of all the aircraft, with every camera set to automatic exposure, shutter speed, and ISO 100. This is a surprise, given the Autel uses a 1/2.3” imager, versus the 1” rolling shutter of the Yuneec and the global shutter of the DJI aircraft. Based on internet forums, Autel is well known for impressive camera parameters.

All flights are single-battery flights. This is important, as battery changes work differently across the aircraft. Using Yuneec and DJI products and their respective software applications, we are able to fly larger sites with proper battery management: the aircraft returns to the launch point when a battery is depleted and resumes the mission where it left off once a fresh/charged battery is inserted. The Autel mission planner currently does not support multi-battery missions (although we’re told it soon will).

There are a few aspects of this workflow we appreciate and some we do not. For example, when flying Autel and Yuneec products, we’re able to act as responsible pilots operating under our area-wide Class B authorization provided by the FAA. To fly the DJI Phantom, the aircraft requires a DJI-provided unlock that permits flights. It’s a small annoyance, yet if one shows up on a jobsite not anticipating an unlock, it can be tedious. In some instances, we are just on the edge of, or outside, controlled airspace, yet DJI’s extremely conservative system still requires an unlock. Most times, the unlock is very fast; other times, it doesn’t happen at all.

All three aircraft are reasonably fast to deploy, which is important when the LAANC authorization window for a zero-altitude grid is short. Autel clearly wins the prize for rapid deployment, with the EVO taking approximately 30 seconds from case-open to in-the-air. Mission planning may be managed prior to flight and uploaded once the UA has left the ground. We are experiencing much the same with the latest release of the EVO II 1” camera as well. We also appreciated the lack of drift and angle in relatively high winds (26mph+).

DJI is next fastest at approximately three minutes (assuming propellers remain attached in the case), while the mission planning aspect is a bit slower than the Autel system. DJI uploads the mission to the aircraft prior to launch. Of course, this assumes we’ve already achieved approval from DJI to fly in the restricted airspace, on top of the FAA blanket approval. If we haven’t, we may find (and have found) ourselves unable to fly once on-site, due to glitches or slow response from DJI.

Yuneec is the slowest to deploy, given six props that must be detached for transport. Powering the ST16 controller, attaching props, and waiting for GPS lock often requires up to five minutes. The mission planning tool (DataPilot) is significantly more robust than DJI’s GSPro, third-party Litchi, or other planning apps, and far more robust than Autel Explorer’s mission planner. DataPilot also essentially ensures the mission will fly correctly, as it auto-sets the camera angle for different types of flight, reducing the margin for pilot error. The Yuneec H520 is superior in high winds, holding accurate position in winds nearing 30mph.


All three aircraft turn out very usable models. All aircraft capture very usable, high-quality images. All of the aircraft are, within reason, accurate to ground points prior to being tied to GCP.

We were surprised to find we prefer the Autel EVO and are now completing this project after having acquired an Autel EVO II Pro with a 1” camera and 6K video.

Why?

Foremost, the Autel EVO family offered the most accurate positioning of the aircraft compared, across the many, many missions flown over this site. With dozens of comparison datasets, the Autel also offered the fastest deployment and the ability to fly well in high winds when necessary. The cost of the Autel EVO and EVO II Pro makes them exceptionally accessible and entirely reliable tools. That the Autel EVO requires no authorization from an overseas company, particularly in areas where we already have FAA authorizations, is significant to us, and the image quality is superior to either of the other aircraft.

We also greatly appreciate the small size of the aircraft, as it takes little space in our work truck, and our clients appreciate that we’re not invasive when working residential areas for them. The aircraft isn’t nearly as noisy as other aircraft, resulting in fewer people paying attention to the UA on the jobsite. The bright orange color, coupled with our FoxFury D3060 light kit (used even in daylight) assists in being able to see the aircraft quite easily, even when up against a white sky or dark building background.

We also, of course, appreciate the speed of deployment. With safety checks, LAANC authorizations, mission planning, and powering on the remote and aircraft, the Autel EVO is deployable in under two minutes. When flying in Class G airspace, case to airborne can be accomplished in under 30 seconds.

Battery life on the EVO 1 is substantial at 25 minutes, while our newly acquired EVO II Pro offers 40 minutes of flight time with incredible images to feed into Pix4D or other post-flight analytics software.

Of greatest importance, the EVO provides the most accurate XYZ location in flight compared to the other aircraft. For those not using GPS systems such as the DTResearch 301 that we used on this project, accuracy is critical, and being able to ensure clean capture with accurate metadata is the key to successful mapping for input to AutoCAD applications.

WHERE TO LEARN MORE:

www.autel.com (UA, mission planning)

www.dtresearch.com (RTK Tablet with hyper-accurate antenna system)

www.dji.com (UA, mission planning)

www.foxfury.com (Lighting system for visualization)

www.hoodman.com (GCP, LaunchPad, SkyRuler)

www.Pix4D.com (Post-flight mapping/modelling software)

www.sundancemediagroup.com (training for mapping, Pix4D, public safety forensic capture)

www.yuneec.com/commercial (UA, mission planning)

With thanks to Autel, Hoodman, DTResearch, and Pix4D.