The Year of LiDAR

2022 is clearly the year of LiDAR. 

At all of the UAS shows in the USA, Mexico, Canada, and the EU, LiDAR was the hot topic of 2022, and 2023 is shaping up to be more of the same, with significant growth.

LiDAR (Light Detection And Ranging) is a sensor that uses a laser, a position-controlled mirror, an IMU (Inertial Measurement Unit), and internal processing to record geolocation data.

A LiDAR sensor emits a pulse of light toward the target (the ground). The light is reflected from the surface (a point) and returned to the sensor. The receiver detects the returning signal and calculates the distance the light has traveled. Combining the position of the sensor, the mirror angle, the IMU attitude, the direction in which the light was sent, and the calculated distance, the 3D position of the return can be determined. With millions of reflections striking a terrestrial surface and returning to the LiDAR sensor, these contact “points” are used to generate a 3D model or ortho, re-creating the target area in a digital environment.
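To make the arithmetic concrete, here is a minimal Python sketch of the range and point calculation described above (the function and numbers are illustrative, not any vendor's firmware):

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def lidar_point(sensor_pos, unit_direction, round_trip_time_s):
    """Locate a single LiDAR return.

    sensor_pos:        sensor position (x, y, z) in meters
    unit_direction:    unit vector of the emitted pulse, derived from the
                       IMU attitude and the mirror angle
    round_trip_time_s: time between pulse emission and return detection
    """
    # The pulse travels to the surface and back, so halve the round trip.
    distance = C * round_trip_time_s / 2.0
    return np.asarray(sensor_pos) + distance * np.asarray(unit_direction)

# Example: a pulse fired straight down from 120 m returns after ~800 ns.
print(lidar_point((0.0, 0.0, 120.0), (0.0, 0.0, -1.0), 800e-9))
# -> roughly [0, 0, 0.08]: the return point is essentially at ground level
```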

Because LiDAR sensors generate their own signal pulse, they do not depend on illumination from other sources (for example, the sun), and many LiDAR operators/pilots capture data at night. As long as nothing interferes between sensor and surface, it is also possible to collect data below cloud cover (or in the dark). LiDAR therefore offers extremely flexible access to areas requiring scans: it can fly at night, or when cloud cover degrades lighting to the point where photogrammetry may not be possible.

LiDAR sensors were previously relegated to fixed-wing or rotary manned aircraft due to weight and cost; they are now accessible to any mid- or heavy-lift UAS.

The above image is the author’s first experience with LiDAR: a Velodyne VLP16 with Geodetics IMU, mounted to a Yuneec H920 hexacopter.

With ever-increasing flight efficiency coupled with the reduced weight and cost of LiDAR sensors, there are several aircraft and LiDAR systems available at price points to suit virtually any budget. While LiDAR may not yet be for casual pilots, commercial pilots report near-immediate full ROI with LiDAR due to the current scarcity of complete systems.

Sensors may be purchased as a complete/total solution with aircraft, support software, and payload, or owners of medium-lift systems may purchase LiDAR sensors separately to mount on whatever aircraft they’re familiar and comfortable with. For example, there are many LiDAR payloads available for the DJI Matrice 300 platform, as well as for Inspired Flight, Freefly, Yuneec, Maptek, Microdrones, and other systems.

LiDAR packages may be stand-alone, combined with separate RGB cameras for photogrammetry, or assembled with both in one housing. For example, the highly popular GeoCue 515 package not only offers a Hesai XT32 LiDAR sensor, it also includes two 20MP RGB cameras for colorizing the pointcloud or for photogrammetry deliverables. Additionally, the system is designed to properly and precisely scale RGB data onto the 3D pointcloud, providing not only a very accurate and precise model, but colorized, photo-realistic data for engineers, surveyors, construction teams, graphic designers, game designers, etc.
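Conceptually, colorizing a pointcloud means projecting each 3D point into a calibrated RGB frame and sampling the pixel it lands on. The sketch below is a simplified illustration of that projection step, not GeoCue's actual pipeline; a production system also handles lens distortion, occlusion, and the choice among many overlapping images:

```python
import numpy as np

def colorize_points(points_world, R, t, K, image):
    """Color LiDAR points by projecting them into one RGB frame.

    points_world: (N, 3) array of points; R (3x3) and t (3,) are the
    world-to-camera extrinsics; K is the 3x3 camera intrinsic matrix.
    """
    cam = points_world @ R.T + t               # world -> camera coordinates
    in_front = cam[:, 2] > 0                   # ignore points behind camera
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                # perspective divide -> pixels
    h, w = image.shape[:2]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((len(points_world), 3), dtype=np.uint8)
    colors[valid] = image[v[valid], u[valid]]  # sample RGB at each point
    return colors
```

A production pipeline would additionally weight multiple views and reject occluded points; the geometry, though, is just this pinhole projection.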

Pilots, engineers, program managers, and surveyors will want to consider several factors when choosing a LiDAR payload to purchase or rent.

  • Cost
  • Penetration
  • Resolution
  • Software cost/flexibility
  • Difficulty of operation

Different sensors will yield different results. Below are examples from the DJI L1, the Velodyne VLP16 (Microdrones HR), the Hesai Pandar XT32, and the Riegl VUX1 sensors. Profiles/cross sections captured from LP360 illustrate the surface data from the various sensors and are a reliable way of displaying vegetation penetration.

DJI L1

Pictured above, the DJI L1 is incapable of any effective penetration through vegetation or other porous areas. Additionally, strip alignment may be challenging in some scenarios. This data was captured, initially processed in DJI Terra, and finish-processed in GeoCue LP360.

Microdrones MD1000HR (VLP16)

The profile seen here, demonstrates the penetration capabilities of the Microdrones HR VLP16 payload. Note the greater resolution of data below trees, both broadleaf and palm.

GeoCue 515

In this image, there are no gaps beneath the trees. In the center, a uniform depression is visible: the Hesai Pandar XT32 was able to “see” below the shallow water surface. In this case, the water was approximately 12″ deep, yet the creek bottom is solid (visible). While the below-water data is not viable for measurement, it does provide additional data for engineering considerations.

RIEGL VUX1

These two illustrations are sourced from the Riegl VUX1 sensor. This sensor provides the highest resolution of the four compared here, with a much higher price tag to match the image quality. Note that in the zoomed-in profile, train rails/tracks are not only visible but accurately measurable. There are no holes in the surface beneath any of the trees, and the tree detail is sufficient to classify tree types.
 

“Penetrating vegetation is a key function of LiDAR sensors; this is why tree profiles/slices have been used to illustrate these challenging scenarios.”

WHAT ABOUT SOLID STATE LiDAR SYSTEMS?

It is worth noting that solid state LiDAR systems are on the rise, with active development toward longer range and higher density. The technology hasn’t yet matured to the point where solid state LiDAR is broadly applicable to UAS work, but it has proven promising due to lighter weight, lower power consumption, and speed. Development is heavily focused on autonomous vehicles at present, yet we fully anticipate seeing solid state LiDAR available for aerial applications soon.

HOW IS LiDAR DIFFERENT FROM PHOTOGRAMMETRY?

Photogrammetry uses multiple images with embedded geodata, matching pixels and metadata to create an orthomosaic. Pointclouds can be derived from images with slightly less accuracy, but at a significant time cost. A 50-acre field processed as a photo-derived pointcloud may take up to 12 hours on an average computer, while the same computer will process a LiDAR-sourced pointcloud in under 30 minutes. LiDAR is also significantly faster to fly than a photogrammetry mission, as the need for deep overlap is lessened in a LiDAR workflow.

Additionally, LiDAR may be flown at night (unless colorization is needed) while photogrammetry requires daylight hours. 

On the other hand, photogrammetry missions may be flown while there is water on the ground after a flood or heavy precipitation. LiDAR works best in dry, non-reflective environments. Mirrored windows, water reflecting on leaves, ponds, creeks, etc. will display as blacked-out areas in a LiDAR scan.

In this scan of the Colorado River, areas containing water display as black.

Not all software applications are compatible with all LiDAR sensors. The way trajectories are read and displayed, how data is managed and handled, and even basic features differ widely among the various software tools available today. For example, until recently, data from DJI’s L1 LiDAR system could only be initially processed in DJI Terra software, which is quite limited, and which many feel is “kludgy and slow.” It’s also not a platform known for stability.

Recently, GeoCue added the DJI L1 to its compatibility list, enabling DJI users to process L1 data in the LP360 software with great stability, flexibility, and speed.

SOFTWARE

When choosing a LiDAR system, there are many considerations, the greatest of which is how important high resolution and precision at ground level will be to projects/workflows. Budget frequently makes this determination. However, the bottom line and long-term needs are often at odds with each other; it’s wise to spend “up” to a higher-grade LiDAR sensor when customer satisfaction is at the top of the list. Research often requires higher-grade sensors as well.

When choosing a LiDAR system, consider the aircraft carrying the payload, the software required to process the data, and flight times as well. Two hours flying a narrow-beam sensor vs 30 minutes with a wider throw may make all the difference, particularly when the company has a deep backlog and is focused on efficiency.

Whether an organization is ready for LiDAR now or down the road, there has never been a better time to learn more about LiDAR, pointclouds, and how LiDAR data processing differs from photogrammetry workflows.

Special thanks to Brady Reisch of KukerRanken for the profile slices of data.

CONTROLLABILITY CHECKS REQUIRED FOR UAS FLIGHT

Whether flying for the jobsite or flying for fun and family enjoyment, safety is always paramount with UAS (drones). There are the Federal Aviation Administration (FAA) regulations, also known as the FARs, and then there are the “common-sense” operational actions that all pilots, regardless of why the drone is flying, should perform prior to every flight.

Pre-flight checks aren’t just good practice; they are required by federal law under 14 CFR §107.49. These rules apply regardless of the reason for the flight (recreational or commercial).

Controllability checks address two of the five preflight requirements for drone flight. Prior to flight, pilots should be checking the following (a simple sketch for logging these checks in software follows the list):

  • Weather Conditions (Winds, rain, heat, forecasts)
  • Environmental conditions (people, pets, buildings, trees, other potential threats/challenges)
  • Physical state of the drone (props, batteries, fuselage, attached devices)
  • Any local laws or requirements (not all municipalities allow for drone takeoff/landing on public property)
  • FAA Airspace authorizations (Waivers, LAANC, etc)
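As referenced above, here is a minimal, illustrative Python sketch of how an operation might log these five checks before authorizing a launch (field names are ours, not from any regulation or vendor tool):

```python
from dataclasses import dataclass, field

# The five preflight areas listed above; names are illustrative only.
PREFLIGHT_ITEMS = frozenset({
    "weather",      # winds, rain, heat, forecasts
    "environment",  # people, pets, buildings, trees, other hazards
    "aircraft",     # props, batteries, fuselage, attached devices
    "local_rules",  # municipal takeoff/landing restrictions
    "airspace",     # FAA authorizations: waivers, LAANC, etc.
})

@dataclass
class PreflightLog:
    completed: set = field(default_factory=set)

    def check(self, item: str) -> None:
        if item not in PREFLIGHT_ITEMS:
            raise ValueError(f"unknown preflight item: {item}")
        self.completed.add(item)

    def ready_for_flight(self) -> bool:
        # Launch only when every area has been checked off.
        return self.completed == set(PREFLIGHT_ITEMS)
```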

This article focuses on controllability checks, which require less time to perform than you’ve already spent reading this far! They are fast, efficient, and just plain best practice for any drone mission or plan.

WHAT IS A CONTROLLABILITY CHECK?

A controllability check (logged as a C/C) provides pilots with immediate awareness of the aircraft’s readiness for flight, and a means of ensuring all control surfaces and systems (props/hull/camera/telemetry/communications) are functioning to expectation and safety requirements.

HOW TO PERFORM A CONTROLLABILITY CHECK

After physically inspecting the props for cracks, ensuring any payload is securely attached to the hull, checking the landing gear and motor arms, confirming the battery is locked into the body of the UAS, and clearing the launch/land area, the aircraft is ready for flight.

Launch the drone. We recommend launching to an altitude of approximately 12 feet, or higher than the tallest nearby person. Some pilots prefer to launch to an altitude of 2-3 feet, which suffices for a controllability check, but that puts the aircraft at a level that may harm bystanders in the event of flight control issues.

Once the drone is airborne and at a safe altitude, we’ll perform a controllability check.

The left stick was already used to launch the drone.

Using the left stick, test the YAW function of the aircraft. Only the left stick should provide input to the aircraft. Pushing the stick LEFT (9 o’clock position) should cause the aircraft to rotate left. Next, do the same with the left stick pushed to the right (3 o’clock position). Again, the aircraft should rotate (YAW), this time to the right.

 

Next, push the stick down. This should cause the aircraft to descend. Be sure the aircraft remains at a level that is above the heads of anyone in the immediate area. As mentioned previously, we typically perform these maneuvers at approximately 12’/4 meters.

 

Generally, we begin the controllability check with the left stick, as the left stick produces no horizontal movement; the aircraft hovers in one place and moves within the “column” directly above the launch/land pad. Starting with the left stick is a small nod to the increased risk that comes with horizontal movement.

Now that we’re certain the left stick is controlling the aircraft as expected, we’ll move to the right stick. 

The right stick is a bit more dynamic, as it controls all horizontal movement of the aircraft, and the aircraft will move forward, backward, roll right, and roll left. 

We recommend that the first right-stick movement be forward flight, pushing the aircraft away from the pilot (assuming the pilot is slightly to the right/left of and behind the aircraft), providing a safety buffer in the event of a radio problem.

Now check the other three axes of the right stick: roll right, roll left, and backward/reverse flight.

These stick movements do not require “big” moves. In our training, we teach pilots to keep the aircraft within a 24″ square box. By keeping the flight area very small, the aircraft cannot build momentum toward any obstacle that might be in the area. Additionally, a small area reduces the load on the battery, ensuring the longest possible flight time.
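To summarize the full sequence, here is a minimal sketch of the stick-input/expected-response pairs a pilot verifies during a C/C (Mode 2 controls assumed; the names are ours for illustration, not any manufacturer's API):

```python
# Expected aircraft response for each stick input during a controllability
# check, assuming Mode 2 controls.
EXPECTED = {
    ("left",  "left"):  "yaw left",
    ("left",  "right"): "yaw right",
    ("left",  "up"):    "climb",
    ("left",  "down"):  "descend",
    ("right", "up"):    "forward flight",   # test this direction first
    ("right", "down"):  "backward flight",
    ("right", "left"):  "roll left",
    ("right", "right"): "roll right",
}

def verify(stick: str, direction: str, observed: str) -> bool:
    """Any mismatch between the expected and observed response means the
    aircraft is not ready for the mission."""
    return EXPECTED[(stick, direction)] == observed
```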


And that’s all! Once the controllability check is completed and all control surfaces have been verified, the planned missions may begin.


Some pilots prefer (and our policy is) to land the aircraft after the controllability check and insert a fresh battery prior to beginning any mission. Each organization/business/operation will have its own standards and best practices. If a fresh battery was used for the C/C, and the C/C was performed rapidly, there is generally no issue with beginning the mission while the aircraft is still hovering after the final stick/control check.


Use your own best judgment; the key is identifying consistent behaviors, processes, and checks to ensure the aircraft is ready to be flown or put into automated mission modes.


Thanks for reading!



At KukerRanken, we’re dedicated to providing the best UAS information for Architects, Surveyors, Construction companies, and Engineers.  When questions arise, we’re here to provide answers. 

3 Months FREE PROCESSING


Purchase a Microdrones MD1000HR between today and May 27, 2022, and save nearly $4000.00 in processing costs.

When a Microdrones MD1000HR is purchased, customers typically purchase one year of limited-access data processing. However, for this limited time, KR and Microdrones are offering three free months of unlimited processing, through May 27, 2022.
We have units available now, so no need to wait!

The MD1000HR flies over the Las Vegas Convention Center during upgrades to the parking lot areas, tram areas, and walkways to new West Halls (Spring, 2022)

We have demo datasets available and are ready to do an online demonstration of capture, ingest, data processing, and delivery to civil tools, 3DR, Pix4D, and other popular analysis and delivery applications.


KukerRanken also offers industry-focused training for Part 107, practical applications of drones/UAS in Survey, Engineering, or Construction workflows, including Leica software tools and Pix4D products.

Contact Brady, Douglas, Bryan, or Darrell for more information on this limited-time opportunity from Microdrones and KukerRanken!

RTK CAPABLE DRONES/UAS for SURVEY, CONSTRUCTION, ENGINEERING

SAVE TIME, GENERATE GREATER REVENUE WITH AERIAL TECHNOLOGY

The Autel EVO II aircraft, coupled with the Emlid RS2 base station, is a cost-effective combination for surveyors, construction survey, engineering, etc.

Real-Time Kinematic (RTK) corrections bring significant precision to unmanned aircraft and workflows, even to the point of achieving repeatable precision within 1cm of actual position in a localized dataset.

Adding RTK to an unmanned aircraft enables real-time correction data to be sent to the aircraft, allowing it to write corrected information into the metadata captured by its camera/sensor system. This is achieved through the aircraft’s remote control/ground station controller receiving correction data from either a network service (NTRIP) or a local base station. In either case, the corrected positional data is uploaded to the aircraft.
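For the NTRIP case, the correction link is a simple streaming request. Below is a minimal, hedged sketch of an NTRIP v1 client pulling RTCM corrections from a caster; the host, port, mountpoint, and credentials are placeholders, and the hypothetical forward_to_gcs() stands in for whatever relays corrections on to the ground station:

```python
import base64
import socket

# Placeholder caster details; substitute your network's actual values.
HOST, PORT, MOUNTPOINT = "caster.example.com", 2101, "MOUNTPOINT"
USER, PASSWORD = "user", "password"

creds = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
request = (
    f"GET /{MOUNTPOINT} HTTP/1.0\r\n"
    "User-Agent: NTRIP ExampleClient/1.0\r\n"
    f"Authorization: Basic {creds}\r\n"
    "\r\n"
)

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    sock.sendall(request.encode())
    # Simplified: a real client parses the full response header.
    if not sock.recv(64).startswith(b"ICY 200 OK"):  # NTRIP v1 success line
        raise RuntimeError("caster refused the request")
    while True:
        rtcm = sock.recv(4096)   # raw RTCM 3.x correction messages
        if not rtcm:
            break
        # forward_to_gcs(rtcm)   # hypothetical relay to the ground station
```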

No corrections, no RTK, nearly 10′ from point

One of the greatest challenges/concerns with an RTK aircraft is to ensure the aircraft is receiving corrections throughout the entire flight. Over large areas where there may be pockets of RF interference from powerlines, trees, buildings, or other obstructions, it’s possible to have a few images without RTK corrections, particularly at the edges of the flight area. One remedy is to add a range extender such as the 4Hawks antenna system to the aircraft’s remote.

The 4Hawks Antenna system assists in ensuring RTK corrections are sent to the aircraft throughout the entire mission area.

An RTK aircraft gains only a slight benefit from the antenna alone (autonomous positioning); adding a base station or NTRIP network correction source is what delivers true precision.

No corrections, RTK antenna-only. 13″ from actual
With RTK corrections sent to the aircraft, the point and the aircraft projections are 1 cm from the point.

We used a DT Research 301 data collector with a Seco RTK rod and head to verify data from the aircraft’s indicated position, and have since used a Leica GS18i to verify the points.

Jeremy Kippen from the KukerRanken Las Vegas store uses the DTResearch 301 system as a data collector to capture points on the ground, while flying the Autel EVO II RTK aircraft

Incorporating an RTK aircraft into the construction site, survey, engineering project, and many other uses provides a safer, faster, more cost-effective means of capturing precise data, no matter the scenario. Topos, DSMs, DTMs, DEMs, orthos, pointclouds, extraction for surfaces, and many other deliverables become significantly more efficient when proper training and implementation techniques are observed. At KukerRanken, we’re here to help with UAS program development, training from Part 107 to operational techniques, and post-processing best practices.

Contact one of our KukerRanken staff to gain access to datasets demonstrating RTK with drone workflows. We offer Pix4DMapper, Pix4DSurvey, Leica Infinity, Leica 3DR, and many other training programs.

Viva Las Vegas (LiDAR Excitement)

Commercial UAV Expo effervesces in LiDAR and Face to Face gathering

DR. A. STEWART WALKER 09.22.2021

Diversified Communications is the very model of a modern conference company, but even its most experienced managers must have harbored slivers of doubt as they prepared the Commercial UAV Expo Americas, in the Mirage, Las Vegas, on 7-9 September. If we make it, will they come? Will they be so desirous of renewed face-to-face contact that the conference is a sell-out, or will they be unsure, unvaccinated or unmasked, therefore unwilling to risk it?

It turned out to be the former. The event was a huge success and participants reveled in being back together. The raw numbers, provided by DivCom’s genial and extremely knowledgeable event strategist, Carl Berndtson, were: 2767 registrants from 61 nations and all 50 states, 130 exhibitors (the hall included a Korean pavilion for the first time), 12 product launches, 150 speakers and double the projected attendance in the conference sessions. Registration reached 88% of the 2019 figure, way higher than many other conferences that have happened in recent months.

The outdoor demos were a sell-out, with 300 attendees. We were bused to the site near Henderson, Nevada. The shadeless bleachers became brutal as the morning wore on and the temperature rose towards 35°C, but compere Douglas Spotted Eagle of Sundance Media Group and KukerRanken, a regular at these events, repeatedly enjoined attendees to partake of the water provided, so casualties were minimal. We saw UAVs flown in 20-minute slots by Skyfront, CommAris (a brand of Terrafugia, the flying car people), Doosan Mobility Innovation, Skydio, AEE Technology, Autel Robotics, and BRINC/Adorama Business Solutions. This immediately underlined a theme of the conference, the real and increasing role of UAVs in emergency management, first response, and search and rescue. Another very apparent feature was the number of large aircraft, always, however, below the 55 lb limit. Skyfront’s multi-rotor, long-endurance UAV, with a gas engine generating electricity to drive the propellers, can carry a YellowScan lidar, though the large number of available payloads have somewhat of a defense focus. Douglas Spotted Eagle conducted an unscripted show of hands of the audience to see who was using lidar: there were nine responses from the bleachers, five using DJI, one Microdrones and three YellowScan.

Next on was CommAris, the big Seeker long-endurance VTOL requiring two crew with full jumpsuits and helmets to get it airborne. Despite its size (15’ wingspan), the Seeker can be quickly assembled in the field and can carry a 10-lb payload. I spoke to the CommAris folk in the exhibition: so far they don’t have a lidar customer, but they expect several and will keep us informed. As it flew, we saw a hawk (the bird, not the name of another UAV) fly near to take a look, but it elected not to attack. Doosan’s UAV was carrying a USPS package. It uses a hydrogen cell to increase endurance and brought home to us, if we weren’t already believers, that the carrying of packages by UAVs is happening in many countries and is a reality, no longer confined to carrying anti-venom in the remote Amazon, but likely soon to be part of daily life for the non-fluvial Amazon, Walmart and others. Indeed, Walmart’s “director last mile” gave a keynote on Thursday.

IMG 0446 600x400

Doosan Mobility Innovation, indoors at Commercial UAV Expo 2021.

Act five was the public debut of the Mach 6 UAV from AEE, focused on public safety, with dual batteries to increase endurance. Though the payloads shown were not geospatial (thermal imagery of the audience; megaphone; delivery of automated external defibrillator through a partnership with Schiller Medical), the Mach 6 could easily carry lidar. It uses radar for collision avoidance and AEE has BVLOS very much in mind. AEE was followed by another new product, the Skydio X2. The firm has a defense focus and has won an AUVSI Excellence in Innovation award. The demo involved imagery and the creation of a mesh with photogrammetry – no mention of lidar. Like all the presenters, the Skydio team explained aspects of their software using both PowerPoint and live demos on the big screen, though bright sunlight and intense heat sapped audience concentration.

The Autel Dragonfish VTOL is not new, but has evolved considerably from earlier models. The aircraft has a two-hour endurance and offers a terrain-following feature. After the mission, it landed perfectly on the mat provided for it. The audience consisted of UAV veterans, yet this pinpoint ability – which all demos featured – never failed to raise some applause. The last performer was BRINC Drones, a Las Vegas company, assisted by Adorama. Its Lemur S, aimed at the public safety market, uses lidar to help navigation, but not for geospatial purposes. Nevertheless, this remarkable aircraft is worth a few words: if it crashes and tips over, it can get up, right itself and resume the mission; and it was demonstrated with an attachment that can break glass! The latter was used to enter a small hut on the demo site, and we were also shown the UAV flying thermal imagery inside a school bus. Truly remarkable!

IMG 0445 600x400

Commercial UAV Expo 2021 attendees

Thus ended a wonderful morning, impressing upon us the progress being made by UAVs to the extent that they are part of daily life. The efforts of the firms’ personnel, some of whom had been out to the site on one or two previous occasions to rehearse, deserved applause – they had suffered the torrid conditions in order to make our time as productive as possible.

Returning from the desert, we had to prepare quickly for the afternoon fare, the product preview presentations. As always, these were a bit of an endurance test, with 17 15-minute presentations in each of two rooms, necessitating agile jumping between the two in pursuit of the best mix. There were frustrations – ASPRS did not speak at the time advertised on the program, and RIEGL and LiDAR USA were on at the same time – yet the speakers managed to overcome the temptations to firehose us with small detail of their products and the afternoon passed quickly and informatively. There was too much here to report in a few words, but it’s worth following up the geospatial lidar players by looking at their websites for the latest developments, such as DJI, GeoCue, Leica Geosystems, LiDAR USA, RIEGL, SimActive and YellowScan, and imaging players such as AgEagle and Phase One.

After the demanding first day, the event resumed on the Wednesday with opening remarks from DivCom director Lee Corkhill. Top of the bill keynote was Stephen Dickson, FAA Administrator (ex USAF and Delta). He gave a fine presentation, but, as usually happens in these events, once its representative had departed, FAA was targeted with some less than complimentary remarks by later speakers, all of whom are anxious to fly BVLOS, or in the dark, or over populated areas, or all of the above, sooner rather than later. As Stephen said, lots of progress has been made on night flying, but for other desires waivers are the current way to go, so we’re at an inflection point as this process cannot be sustained at scale. Whatever other speakers may have felt, there is no doubt that FAA is busy and one cannot argue with Stephen’s closing remarks, “Safety is a journey, not a destination”, and his emphases on humility and safety.

There was too much in the show to spend time on every booth. Among highlights were, prominent on the DJI booth, an M600 with the Zenmuse L1 lidar sensor. Phase One had a new camera, the P3, complete with gimbal. Emesent had the Hovermap lidar sensor, complete with SLAM software, i.e. it also works in GNSS-denied environments, and it can be mounted on UAVs or land vehicles, carried in a backpack, or hand-carried – indeed, Emesent perhaps stole the show by showing it on the back of Spot, the robotic dog from Boston Dynamics (remember reading about it in LIDAR Magazine?[1]). Leica Geosystems introduced BLK2FLY, its gorgeous new laser scanner snuggling inside a quadcopter, also called an “autonomous flying laser scanner”. The software is richly furnished with SLAM as well as GNSS/IMU components. Some of the exhibitors gave additional presentations in the theater set up in the exhibition hall, so there was no excuse not to find out about products of interest.

LIDAR Magazine was invited to participate in the “Meet the Press” event. Of the 25 firms who entered, 16 sent representatives to the one-hour live event, at which they spoke for two minutes each, followed by one minute for questions. The speakers entered into the spirit of this “fun” session and fielded the questions both competently and with a sense of humor. The journalists went into purdah and chose Emesent, BRINC Drones and vHive as winners[2].

PXL 20210908 224737409.MP  600x400

Teledyne FLIR spokesperson shares a new sensor with the press.

Amongst all this, I attended as many sessions as possible. Jeremiah Karpowicz of DivCom conducted a good on-stage discussion with Brandon Torres Declet, the new CEO of AgEagle. DivCom favors a formula where part or all of almost every session is a panel discussion, with several experts on stage. Some of these were a tad thin or repetitive, but invariably attorneys, police officers and firefighters did well. These people may not be ideal to discern trends for us after a moment’s thought, but they are performing or managing UAV flights in the thousands every year, so listen up! We’ve moved from shiny toys to commercialization, rapidly and successfully. A session on construction revealed what could be done, for example measuring cranes, flare structures, Las Vegas’s Allegiant Stadium. UAV-delivery firms Zipline and DroneUp both made deep impressions.

Towards the end of the conference, our regular contributor Lewis Graham chaired a session on surveying and mapping. I felt on more familiar ground here as speakers from Ohio UAS Center and The Ohio State University talked about the multiplicity of projects they had completed, while sometimes humorously referring to some of the day-to-day problems they encountered. Also in this session was one of the firms from the Korean pavilion in the exhibition, describing UAV-based geomatics land data acquisition in Ethiopia, using a UAV built in Korea.

ASPRS ran four two-hour workshops, two on Wednesday and two on Thursday, one pair on UAV-photogrammetry and one on UAV-lidar. All were well attended, confirming the thirst for knowledge that was so obvious from the buzz and attendance at this event.

The DivCom sales team was energetically working the halls and must have a good chance of following the sell-out exhibition with another one at the Geo Week conferences in Denver in February 2022, where ILMF, AEC Next, SPAR 3D, ASPRS and USIBD will be combined. Meanwhile, LIDAR Magazine was taking every opportunity to solicit firms for articles, as a result of which we are working with DJI, Emesent, LightWare, Phoenix LiDAR Systems, SimActive and several others.

DivCom has decided to change the venue of Commercial UAV Expo Americas and the next iteration of the event will be at Caesars Forum on 6-8 September 2022. LIDAR Magazine would have preferred a slot later in the year, so we would be less well done at the outdoor demos, but won’t hesitate to be there, to learn, enjoy, contribute and be thankful. While the scope of UAVs extends far beyond the geospatial, and the lidar vertical is but a small part of a market dominated by public safety and parcel deliveries, I wouldn’t willingly miss this event. The UAV world is fast moving and more than one company said it was in robotics rather than drones. DivCom’s formula of a technology-based rather than market-based event works and the wide range of backgrounds of attendees is a big plus. I felt almost overwhelmed by information, yet stimulated, inspired and anxious to reflect in order to get it all into perspective. Probably that’s what a successful conference event should inspire!

As I finished this report, HxGN LIVE GeoSummit and Intergeo, remote and hybrid respectively, were swinging into action. Once again, we’re in the thick of development and struggling to assimilate all the news and innovations. It’s a great time to be in lidar!

A Deep Insider’s Look at a Rugged Terrain Mission to Investigate a Helicopter Crash with Drones


Crash site investigation with drones has emerged as a leading application for unmanned systems in public safety.  Gathering data that can be used by investigators in a courtroom, however, requires careful mission planning.  Here, sUAS expert and industry figure Douglas Spotted Eagle of  KukerRanken provides a detailed insider’s view of a helicopter crash site investigation.

Unmanned aircraft have become proven assets during investigations, offering more than just the ability to reconstruct a scene. When a high ground sampling distance (GSD) is used, the data may be deeply examined, allowing investigators to find evidence that may have been missed for various reasons during a site walk-through.

Recently, David Martel, Brady Reisch, and I were called upon to assist in multiple investigations where debris was scattered over a large area and investigators could not safely traverse the rocky, uneven ground across which high-speed impacts may have spread evidence. In this particular case, a EuroStar 350 aircraft may have experienced a cable wrap around the tail rotor and boom, potentially pulling the tail boom toward the nose of the aircraft and causing a high-speed rotation of the hull prior to impact. Debris was spread over a relatively contained area, with some evidence unfound.


Per the FAA investigators:

“The helicopter was on its right side in mountainous densely forested desert terrain at an elevation of 6,741 ft mean sea level (MSL). The steel long line cable impacted the main rotor blades and was also entangled in the separated tail rotor. The tail rotor with one blade attached was 21 ft. from the main wreckage. Approximately 30 ft. of long line and one tail rotor blade were not located. The vertical stabilizer was 365 ft. from the main wreckage.”

With a missing tail rotor blade and the missing long line, unmanned aircraft were called in to provide a high resolution map of the rugged area/terrain, in hopes of locating the missing parts that may or may not aid in the crash investigation.

The terrain was difficult and unimproved, requiring four-wheel drive vehicles for access into the crash site. Due to rising terrain, we elected to launch/land the aircraft from the highest point relevant to the crash search area, which encompassed a total of approximately 70 acres.

Adding to the difficulty of finding missing parts was that the helicopter was partially covered in grey vinyl wrap, along with red and black vinyl wrap, having recently been wrapped for a trade show where the helicopter was displayed.


We arrived on scene armed with pre-loaded Google Earth overheads and an idea of optimal locations to place seven Hoodman GCP discs, which would allow us to capture RTK points for accuracy and set Manual Tie Points once the images were loaded into Pix4D. We pre-planned the flight for an extremely high ground sampling distance (GSD), averaging .4 cm per pixel. Due to the mountainous terrain, this GSD would vary from the top to the bottom of the site. We planned to capture the impact location at various GSDs for best image evaluation, averaging as tight as .2 cm/px. Some of these images would be discarded for the final output and used only for purposes of investigation.

Although the overall GSD was greater than necessary, the goal was to be able to zoom in very deep on heavily covered areas, with the ability to determine the difference between rocks and potential evidence, enabling investigators to view the overall scene via a 3.5 GB GeoTIFF in Google Earth and refer back to the Pix4DMapper project once rendered/assembled.
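For planning purposes, GSD follows directly from camera geometry and altitude. The sketch below uses the standard nadir relationship with representative 1-inch-sensor numbers, not the EVO II Pro's exact specifications:

```python
def gsd_cm_per_px(altitude_m, focal_mm, sensor_width_mm, image_width_px):
    """Ground width covered by one pixel at nadir, in centimeters."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_mm * image_width_px)

def altitude_for_gsd(target_cm, focal_mm, sensor_width_mm, image_width_px):
    """Invert the relationship to plan flight altitude for a target GSD."""
    return (target_cm * focal_mm * image_width_px) / (sensor_width_mm * 100.0)

# Representative 1" sensor: 13.2 mm wide, 5472 px across, 10.6 mm lens.
print(round(altitude_for_gsd(0.4, 10.6, 13.2, 5472), 1))  # ~17.6 m AGL
```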

The same scene minus initial marker points.

Although working directly in Pix4D provides the best in-depth view of each individual photo, the Google Earth overlay/geotiff enables a reasonably deep examination.

Using two of the recently released Autel EVO II Pro aircraft, we planned the missions so that one aircraft would manage North/South corridors while the other captured East/West corridors.  Planning the mission in this manner allows for half the work time, while capturing the entire scene. This is the same method we used to capture the MGM festival grounds following the One October shooting in Las Vegas, Nevada. The primary difference is in the overall size, with the Pioche mission being nearly 70 acres, while the Las Vegas festival ground shooting area is under 20 acres in total.

Similar to the Las Vegas shooting scene, shadow distortion/scene corruption was a concern; flying two aircraft beginning at 11:00 a.m. and finishing by 1:30 p.m. helped us avoid issues with shadow.

Temporal and spatial offsets were employed to ensure the EVO II Pro aircraft could not possibly collide: we set off from opposite sides of the area at different points in time, with a few feet of vertical offset added for an additional cushion of air between the two EVO IIs. We programmed the missions to fly at a lower speed of 11 mph (16 ft/s) to ensure the high-GSD/low-altitude images would be crisp and clean. It is possible to fly faster and complete the mission sooner, yet with the 3-hour travel time from Las Vegas to the crash site, we wanted to ensure everything was captured at its best possible resolution with no blur, streaking, or otherwise challenged imagery. Overall, each aircraft emptied five batteries, with our batteries set to signal for exchange at 30%.

Total mission running time was slightly over 2.5 hours per aircraft, with additional manual flight over the scene of impact requiring another 45 minutes of flight time to capture deep detail. We also captured imagery facing the telecommunications tower at the top of the mountain for line of sight reference, and images facing the last known landing area, again for visual reference to potential lines of sight.


By launching/landing from the highest point in the area to be mapped, we were able to avoid any signal loss across the heavily wooded area. To ensure VLOS was maintained at all times, FoxFury D3060’s were mounted and in strobing mode for both sets of missions (The FoxFury lighting kit is included with the Autel EVO II Pro and EVO II Dual Rugged Bundle kits).

Once an initial flight to check exposure/camera settings was performed, along with standard controllability checks and other pre-flight tasks, we sent the aircraft on their way.

Capturing over 6,000 images, we checked image quality periodically to ensure consistency. Once the missions were complete, we drove to the site of impact to capture obliques of the specific area in order to create a denser model/map of the actual impact site. We also manually flew a ravine running parallel to the point of impact to determine whether any additional debris could be found (we did find several small pieces of fuselage, tools assumed to have been cast off at impact, and other debris).

The initial pointcloud took approximately 12 hours to render, generating a high-quality, highly dense initial cloud.


After laying in point controls, marking scale constraints as a check, and re-optimizing the project in Pix4D, the second step was rendered to create the dense point cloud. We were stunned at the quality of the dense point cloud, given the large area.

The dense point cloud is ideal for purposes of measuring. Although this sort of site would typically benefit (visually) from texturing/placing the mesh, it was not necessary due to the high number of points and deep detail the combination of Pix4D and Autel EVO II Pro provided. This allowed us to select specific points where we believed points of evidence may be located, bringing up the high resolution images relevant to that area. Investigators were able to deep-dive into the area and locate small parts, none of which were relevant to better understanding the cause of the crash.

“The project generated 38,426,205 2D points and 13,712,897 3D points from a combination of nearly 7,000 images.”


Using this method of reviewing the site allows investigators to see more deeply, with the ability to repeatedly examine areas, identify patterns from an overhead view, and safely search for additional evidence that may not be accessible by vehicle or on foot. Literally every inch of the site may be gone over.


Further, using a variety of computer-aided search tools, investigators may plug in an application to search for specific color parameters. For example, much of the fuselage was red, allowing investigators to search for a specific range of red colors. Pieces of fuselage as small as 1″ were discovered using this method. Bright white allowed for finding some items, while 0-16-level black allowed for finding other small objects such as stickers, a toolbox, and oil cans.
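As a rough illustration of that kind of color search (the HSV thresholds below are generic values for "red," not the investigators' actual parameters), an OpenCV pass over an orthomosaic tile might look like this:

```python
import cv2
import numpy as np

img = cv2.imread("ortho_tile.png")           # one tile of the orthomosaic
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)   # threshold on hue, not RGB

# Red wraps around the hue axis, so combine two ranges (generic values).
mask = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255])) | \
       cv2.inRange(hsv, np.array([170, 120, 70]), np.array([180, 255, 255]))

# Keep pixel clusters large enough to be ~1" debris at 0.4 cm/px
# (about 6 px across, roughly 30 px in area).
count, _, stats, _ = cv2.connectedComponentsWithStats(mask)
for i in range(1, count):                    # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] >= 30:
        x, y = stats[i, cv2.CC_STAT_LEFT], stats[i, cv2.CC_STAT_TOP]
        print(f"possible debris near pixel ({x}, {y})")
```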

Using a tool such as the DTResearch 301 to capture RTK geolocation information, we also use the DTResearch ruggedized tablet to capture a localized pointcloud scan which may be tied into the Pix4DMapper application. Capturing local scan data from a terrestrial perspective with GCPs in the image allows for extremely deep detail in small environments. This is particularly valuable for construction sites or interior scans, along with uses for OIS, etc.

Primary Considerations When Capturing a Scene Twin

  • GSD. This is critical. There is a balance between altitude and propwash, with all necessary safety considerations.
  • Vertical surfaces. In the event of an OIS where walls have been impacted, the ability to fly vertical surfaces and capture them with a consistent GSD will go a long way to creating a proper model.
  • Shadow distortion. If the scene is very large, time will naturally fly by and so will the sun. In some conditions, it’s difficult to know the difference between burn marks and shadows. A bit of experience and experimentation will help manage this challenge.
  • Exposure. Checking exposure prior to the mission is very important, particularly if an application like Pix4Dreact isn’t available for rapid mapping to check the data on-site.
  • Angle of sun/time of day. Of course, accidents, incidents, crime, and other scenes happen when they happen. However, if the scene allows for capture in the midday hours, grab the opportunity and be grateful. This is specifically the reason our team developed night-time CSI/data capture, now copied by several training organizations across the country in recent years.
  • Overcapture. Too much overlap is significantly preferable to undercapture. Ortho and modeling software love images.
  • Obliques. Capture obliques whenever possible. Regardless of intended use, capture the angular views of a scene. When possible, combine with ground-level terrestrial imaging. Sometimes this may be best accomplished by walking the scene perimeter with the UA, capturing as the aircraft is walked. We recommend removing props in these situations to ensure everyone’s safety.

What happens when these points are put aside?

This is a capture of a scene brought to us for “repair,” as the pilot didn’t know what he didn’t know. Although we were able to salvage a bit of the scene, the overexposure, too-high altitude/low GSD, and lack of obliques made this scene significantly less valuable than it might have been.

Not understanding the proper role or application of the UA in the capture process, the UA pilot created a scene that is difficult to measure accurately and lacking appropriate detail, and the overexposure creates difficulties laying in the mesh. While this scene is somewhat preserved as a twin, much detail is missing, even though the equipment had the necessary specifications and components to capture a terrific twin. Pilot error cannot be fixed. Operating on the “FORD” principle, understanding that FOcus, exposuRe, and Distance (GSD) cannot be rectified or compensated for in post-processing, means it has to be captured properly the first time. The above scene can’t be properly brought to life due to gross pilot error.

“ALWAYS PUT THE AIRCRAFT OVER THE PRIMARY SCENE LOCATION TO CONFIRM EXPOSURE SETTINGS, KEEPING ISO AS LOW AS POSSIBLE. USE ISO 50-100 IN MOST OUTDOOR SCENARIOS TO OBTAIN THE BEST IMAGE. NEVER USE OVERSATURATED PHOTO SETTINGS OR LOG FORMATS FOR MAPPING.”

Ultimately, the primary responsibility is to go beyond a digital twin of the scene and offer deep value to the investigator(s), which may enhance or accelerate their investigations. Regardless of whether it’s a crash scene, insurance capture, energy audit, or other mapping activity, understanding how to set up the mission, fly, process, and export the project is paramount.

Capturing these sorts of scenes is not for the average run-n’-gun 107 certificate holder. Although newer pilots may feel they are all things to all endeavors benefitting from UA, planning, strategy, and experience all play a role in ensuring qualified, quality captures occur. Pilots wanting to get into mapping should practice with photogrammetry tools and fly the most challenging environments they can find in order to be best prepared for the environmental, temporal, and spatial challenges that may accompany an accident scene. That experience only comes through practice over time: cold weather in which batteries expire faster, satellite challenges in an RTK or PPK environment, overheated tablets/devices, long flight times on multi-battery missions, and winds that force a crabbing mission vs a head/tailwind mission, all while learning to maintain GSD in wild terrain and conduct operations amidst outside forces that influence the success or failure of a mission. Having a solid, tried-and-true risk mitigation/SMS program is crucial to success.

We were pleased to close out this highly successful mission, and be capable of delivering a 3.5 GB geotiff for overlay on Google Earth, while also being able to export the project for investigators to view at actual ground height, saving time, providing a safety net in rugged terrain, and a digital record/twin of the crash scene that may be used until the accident investigation is closed.

EQUIPMENT USED

●  2X Autel EVOII™ Pro aircraft

●  Autel Mission Planner software

●  FoxFury D3060 lighting

●  DTResearch 301 RTK tablet

●  Seco field mast/legs

●  Seco RTK antenna

●  Hoodman GCP

●  Hoodman Hoods

●  Manfrotto Tripod

●  Dot3D Windows 10 software

●  Pix4DMapper software

●  Luminar 4 software

LUTS, Log, 10Bit: Geeking Out on Camera Formats for Drones


It takes more than just a Part 107 to be a good drone service provider: customers require expertise in production, too.  If you’re confused about camera formats for drones and some of the newest options on the market, we’ve got you covered with this deep dive.  UAS expert and industry figure Douglas Spotted Eagle here at KukerRanken provides a detailed and expert explanation of 10 bit file formats, LOG formats and Look-Up Tables (LUTS.).


Camera Formats for Drones: LUTS, LOG, 10Bit

The latest trend in UA camera systems is the ability to record in 10-bit file formats, as well as in LOG formats, which allow the use of Look-Up Tables (LUTs, often pronounced “loots” and more generally pronounced “luhts”) and provide significantly greater dynamic range. This greater dynamic range enables color matching and greater opportunity to color correct alongside higher-quality formats such as those offered by Sony, Blackmagic Design, Canon, and others. Video does not have to be recorded in LOG to use a LUT, but using an input LUT on semi-saturated frames could generate undesirable results.

WHAT IS A LOOKUP TABLE?

A LookUp Table (LUT) is an array of data fields that makes computational processes lighter, faster, and more efficient. LUTs are used in a number of industries where intense data can be crunched more efficiently based on known factors. For purposes of video, a LUT enables the camera to record in a LOG format, enabling deep color correction, compositing, and matching. There are many LUTs available; some are camera-specific, while others are “general” LUTs that may be applied per user preference. A LUT might more easily be called a “preset color adjustment” which we can apply to our video at the click of a button. Traditionally, LUTs have been used to map one color space to another. These color “presets” are applied after the video has been recorded, in a non-linear editing system such as Adobe Premiere, Blackmagic Resolve, Magix Vegas, etc.

Two types of LUTs are part of production workflows:

  • Input LUTs provide stylings for how the recorded video will look in a native format (once applied to specific clips). Input LUTs are generally applied only to clips/events captured by a specific camera. Input LUTs are applied prior to the correction/grading process.
  • Output/”Look LUTs” are used on an overall production to bring a consistent feel across all clips. “Look” LUTs should not be applied to clips or project until after all color correction/grading has been completed.

There are literally thousands of pre-built LUTs for virtually any scene or situation. Many are free, some (particularly those designed by well-known directors or color graders) are available for purchase.

In addition to Input/Look LUTs, there are also 1D and 3D LUTs. 1D LUTs simply apply a value, with no pixel interaction; they are a sort of fixed template of locked values. 3D LUTs map the XYZ data independently, so greater precision is possible without affecting other tints. For a deeper read on 3D vs 1D LUTs, here is a great article written by Jason Ritson.
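To make the 1D case concrete, here is a small numpy sketch: a 1D LUT is just a per-channel lookup table applied by indexing, which is exactly why it cannot express interactions between channels; a 3D LUT samples (R, G, B) triplets jointly, interpolating between lattice points. The gamma curve below is an arbitrary example "preset":

```python
import numpy as np

# Build a 256-entry 1D LUT; this gamma lift is an arbitrary example.
x = np.arange(256) / 255.0
lut = np.clip((x ** (1 / 1.8)) * 255.0, 0, 255).astype(np.uint8)

def apply_1d_lut(frame_8bit, lut):
    """Apply the same value mapping to every channel independently.

    A 1D LUT cannot express interaction between channels; that is
    precisely what a 3D LUT adds, at the cost of interpolation."""
    return lut[frame_8bit]   # numpy fancy indexing performs the lookup

frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
graded = apply_1d_lut(frame, lut)
```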

HOW DOES A LUT WORK?

This is a fairly complex question that could occupy deep brain cycle time. In an attempt to simplify the discussion, let’s begin with the native image.

Natively, the camera applies color and exposure based on the sensor, light, and a variety of calculations. This is the standard output of the majority of off-the-shelf UA systems. This type of output is ideal when the video is not going to be heavily corrected, will be seen on a typical computer monitor or television display, and time is of the essence. These are referred to as “Linear Picture Profiles” and are the most common means of recording video. Cell phones, webcams, and most video cameras record in linear format.

While this is most common, it also reduces flexibility in post due to dynamic range, bit depth, and other factors. In linear profiles, typical 8-bit video has 256 values per color channel, represented equally across the color spectrum. This equal balance of pixel ranges does not make the most effective use of dynamic range, limiting correction options in post. With linear picture profiles, bottom-end/dark colors consume as much of the range as brighter exposures, so while the color balance is “equal,” it is inefficient and does not devote the greatest range to what the human eye resolves best, specifically midtones and highlights.

To take best advantage of a LUT, video should be recorded in a LOG format. It’s rare that an editor would apply a LUT to linear video, as in most cases the resulting color space, exposure, and dynamic range would be unrealistic and challenged.

LOG formats store data differently than linear picture profiles.

In the past, LOG formats were only accessible in very high-end camera systems such as the Sony CineAlta Series, and similar, high-dollar products. Today, many UA cameras offer access to LOG formats.

When the camera’s recording parameters are set to LOG (logarithmic) mode, these native color assignments are stripped away, and the camera records data along a non-linear, logarithmic curve.

LOG applies a curve to the bottom end of the image (darks/blacks) and shifts the available code values toward the more visible areas of the tonal range, where they are used more efficiently and effectively.
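As a toy illustration of what that curve does (our own math, not Autel’s or any manufacturer’s actual transfer function), compare how many 8-bit code values the darkest 10% of scene light receives under a linear encode versus a generic logarithmic encode:

```python
import math

def linear_encode(x):
    # x is scene-linear light, 0..1; code values are spread evenly.
    return round(x * 255)

def log_encode(x, a=50.0):
    # A generic log curve; 'a' is an arbitrary illustrative constant.
    return round(math.log1p(a * x) / math.log1p(a) * 255)

dark = 0.10  # the darkest 10% of scene light
print(f"linear: code values 0..{linear_encode(dark)} cover the shadows")  # 0..26
print(f"log:    code values 0..{log_encode(dark)} cover the shadows")     # 0..116
# The log curve hands the shadows roughly four times as many code values,
# which is exactly why ungraded LOG footage looks flat and washed out.
```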

Of course, this will create the illusion of a low-contrast, washed-out image during the recording process.

A-LOG image direct from camera (left); LUT applied, no additional grading (right)

“A-LOG” is Autel Robotics’ logarithmic recording format. A-LOG not only allows for greater dynamic range, it is also a 10-bit format, enabling a much greater chroma push when color correcting or grading. Most manufacturers offer their own LOG formats.

Graded image; split screen comparing LUT, grade, and source image

Virtually every editing application today can import LUTs to be used with 8-, 10-, or 12-bit LOG files. Autel and DJI are the two predominant UA manufacturers offering LOG capability to their users, and any medium-lift UA that can mount a Sony, Canon, or Blackmagic Design camera would also benefit from shooting in a LOG format.

Magix Vegas allows import of LUTs, and offers the ability to save a color correction profile as a LUT to be re-used, or shared with others.

SHOULD I BE SHOOTING IN LOG-FORMAT?

Shooting video in LOG is much like choosing between .DNG and .JPG. If images/video are to be quickly turned around, and time, efficiency, or client desires outweigh the benefits of grading or correcting, there is no benefit to shooting video in LOG formats; LOG, like DNG (or RAW), will simply slow the process. However, if images or video are to be matched to other camera systems, or when quality is more important than the time required to achieve the best possible image, then LOG should be used to future-proof footage and preserve the best original integrity of the archive. Higher resolution formats such as 2.7K, 4K, 6K, or 8K deserve particular consideration: these formats will most likely be downsampled to HD for most near-term delivery, and shooting in LOG allows the greatest re-use and color preservation when downsampling. Without getting too deeply into the math of it, the 4:2:0 color sampling captured in a 4K or higher format becomes effectively 4:2:2 when delivered in HD, enabling significant opportunity in post-processing.

“Which is more committed to breakfast, the chicken or the pig? Baking colors into recorded video is limiting, and if appropriate, should be avoided.”

CAN I SEE THE FINAL OUTPUT BEFORE COMMITTING TO LOG?

Many monitors today allow for input LUT preview prior to, and during the recording process. This will require some sort of HDMI (or SDI) output from the UA camera into the monitor.

Input LUTs can be applied temporarily for monitoring only, or they can be burned into files for use in editing when capturing Blackmagic RAW (when using BMD products). Bear in mind, once a LUT has been “baked in” to the recorded video, it cannot be removed and is very difficult to compensate for.


Most external monitors allow shooters/pilots to store LUTs on the monitor, which offers confidence when matching a UA camera to a terrestrial or other camera. In our case, matching a Sony A7III to the UA is relatively easy as we’re able to create camera settings on either camera while viewing through the LUT-enabled monitor, seeing the same LUT applied (virtually) to both cameras.

Another method of previewing how a LUT will affect video can be found in LATTICE, a high-end, professional LUT tool for directors and editors.

WHICH CODEC?

Most UA cameras today are capable of recording in h.264 (AVC) or h.265 (HEVC). Any resolution beyond HD should always be captured in h.265. H.264/AVC was never intended as a recording format for resolutions higher than HD, although some folks are happy recording 2.7K or 4K in the less efficient codec. It’s correct to say that HEVC/h.265 is more CPU-intensive, and there are still a few editing and playback applications that cannot efficiently decode HEVC. However, the difference in file size/payload is significant, and image quality is far superior in the HEVC format: a video encoded in HEVC at a lower bit rate can match the image quality of an h.264 video at a substantially higher bit rate.
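As a back-of-envelope sketch of that payload difference (our own arithmetic; the bitrates are illustrative, not any camera’s specification, and the commonly cited rule of thumb is that HEVC needs roughly half the bitrate of h.264 for comparable quality):

```python
clip_minutes = 10
avc_mbps = 100                 # hypothetical 4K h.264 bitrate
hevc_mbps = avc_mbps / 2       # comparable quality at roughly half the rate

def gigabytes(mbps, minutes):
    # Mb/s -> GB: megabits * seconds / 8 bits-per-byte / 1000 MB-per-GB
    return mbps * 60 * minutes / 8 / 1000

print(f"h.264: {gigabytes(avc_mbps, clip_minutes):.1f} GB")   # 7.5 GB
print(f"HEVC:  {gigabytes(hevc_mbps, clip_minutes):.1f} GB")  # 3.8 GB
```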

More importantly, AVC/h.264 as implemented in UA cameras does not support 10-bit video, whereas HEVC does, so not only are we capturing a higher quality image in a smaller payload, we’re also able to access 10-bit, which can make a significant difference when correcting lower quality imagery, particularly in those pesky low-light scenarios often found close to sunset, or night-flight images that may contain noise. Additionally, 10-bit in LOG format allows photographers to use a lower ISO in many situations, reducing overall noise in the low-light image. Last but not least, AVC does not support HDR delivery, whereas HEVC does. Each camera is a bit different, and each shooting scenario is different, so a bit of testing, evaluation, and experience enables pilots to make informed decisions based on client or creative direction and needs.
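The bit-depth difference is easy to quantify. A quick sketch of levels per channel and total representable colors, which is the headroom that survives an aggressive chroma push without banding:

```python
for bits in (8, 10, 12):
    levels = 2 ** bits          # tonal steps per channel
    colors = levels ** 3        # all R,G,B combinations
    print(f"{bits}-bit: {levels:>4} levels/channel, {colors:,} colors")

# 8-bit:  256 levels -> ~16.8 million colors
# 10-bit: 1024 levels -> ~1.07 billion colors
```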

HEVC IS SLOW ON MY COMPUTER!

Slower, older computers will struggle with the highly compressed HEVC format, particularly when 10-bit formats are used. Recent computer builds can decode HEVC on the graphics card, and newer CPUs have built-in decoding. However, not everyone has faster, newer computer systems available. This doesn’t mean that older computers cannot be used for HEVC source video.

Most editing systems allow for “proxy editing,” where a proxy of the original file is created in a lower resolution, lower-compression format. Proxy editing is a great way to cull footage down to the desired clips/edits. However, proxies cannot be color corrected with accurate results; the proxies will need to be replaced with source files once edits are selected. In our house, we add transitions and other elements prior to compositing or color correcting, and LUTs are added in the color correction stage.
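As a small sketch of the relink step (the helper and filename convention here are our own invention, not any NLE’s API): edit against the lightweight proxy, then conform back to the source file before color work.

```python
from pathlib import Path

def proxy_path(source: Path, proxy_dir: Path = Path("proxies")) -> Path:
    """Map footage/clip001.mov -> proxies/clip001_proxy.mp4 (hypothetical convention)."""
    return proxy_dir / f"{source.stem}_proxy.mp4"

# Cull and rough-cut against the proxies, then swap sources back in
# before compositing, color correction, and LUT application.
for clip in [Path("footage/clip001.mov"), Path("footage/clip002.mov")]:
    print(clip, "->", proxy_path(clip))
```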

10-bit A-LOG, no LUT applied (left); 10-bit A-LOG, LUT-only applied (right)
10-bit A-LOG, LUT applied and graded (left); the same grade as a split screen (right)

The “Transcend” video displayed above is a mix of high-end, DSLR, and UA cameras, captured prior to sunrise. Using 10-bit A-LOG in h.265 format allowed a noise-free capture. The video received a Billboard Music Award. We are grateful to Taras Shevchenko & Gianni Howell for allowing us to share the pre-edit images of this production.

WRAP UP

Shooting LOG isn’t viable for every production, particularly low-paying jobs, jobs that require rapid turnaround, or situations where the cameras being matched offer equal or lesser quality output. However, when artistic flexibility is desired, when matching high-end, cinema-focused cameras, or when handing footage over to an editor who requires flexibility, HEVC in 10-bit LOG mode is the right choice.

In any event, expanding production knowledge and ability benefits virtually any workflow and professional organization, and is a powerful step to improving final deliverables.


Does the Drone Industry Really Need 8K?

Pro Read: As a leak indicates that Autel Robotics may be the first to offer a 6/8K camera on a drone, UAS expert and industry leader Douglas Spotted Eagle dives into what the advantages of 8K may be – and whether the drone industry is ready to take advantage of them.

Guest post by Douglas Spotted Eagle, Chief Strategy Officer – KukerRanken

In 2004, Sony released the world’s first low-cost HD camera, known as the HVR-Z1U. The camera featured a standard 1/3” imager, squeezing 1440×1080 (anamorphic/non-square) pixels onto the sensor. It was also the world’s first pro-sumer camera using the MPEG-2 compression scheme, with 4:2:0 color sampling and a GOP method of frame progression; this new technology set the stage for much higher resolutions and, eventually, greater frame rates.

Its “father” was the CineAlta HDW-F900, which offered three 2/3” CCDs and was the industry standard for filmmaking for several years, capturing big hits such as the “Star Wars” prequel trilogy, “Once Upon a Time in Mexico”, “Real Steel”, “Tomorrowland”, “Avatar”, “Spy Kids” (1 & 2), and many others. The newer HDV format spawned from similar technology found in the HDW-F900, and set the stage for extremely high-end camera tech to trickle down into the pro-sumer space.

Over time, camera engineers identified methods of co-siting more pixels on small imagers, binning pixels, or using other techniques to increase capture resolution on small surfaces. Compression engineers developed new compression schemes, bringing forward h.263, AVC (h.264), and now HEVC/High Efficiency Video Coding (h.265), with still others soon to be revealed.

Which brings us to the present.

We have to roughly quadruple megapixels to double resolution, so the jump from SD to HD makes sense, while the jump from HD to UHD/4K makes even more sense. Following that theme, jumping to 6K makes sense, while jumping to 8K is, in theory, perfect, nearing the maximum of the human eye’s ability to resolve information.
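The arithmetic behind that statement, for reference:

```python
# Each doubling of linear resolution roughly quadruples the pixel count.
for name, (w, h) in {"HD": (1920, 1080), "4K UHD": (3840, 2160),
                     "8K UHD": (7680, 4320)}.items():
    print(f"{name}: {w}x{h} = {w * h / 1e6:.1f} MP")

# HD: 2.1 MP -> 4K: 8.3 MP (4x) -> 8K: 33.2 MP (4x again)
```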

At NAB 2018, Sony and Blackmagic Design both revealed 8K cameras, and in the time since, others have followed suit.

During CommUAV and InterDrone, several folks asked for my opinion on 6 and 8K resolutions. Nearly all were shocked as I expressed enthusiasm for the format.

–          “It’s impossible to edit.”

–          “The files are huge.”

–          “No computer can manage it.”

–          “There is nowhere to show 8K footage.”

–          “Human eyes can’t resolve that resolution unless sitting very far away from the screen.”

–          “Data cards aren’t fast enough.”

And….so on.

These are all the same comments heard as we predicted the tempo of the camera industry transitioning from SD to HD, and from HD to 4K.  In other words, we’ve been here before.

Video cameras are acquisition devices. For the same reasons major motion pictures are acquired at the highest possible resolutions, and for the same reasons photographers get excited as on-camera resolutions increase, so should UAS photographers. Greater resolution doesn’t always mean higher grade images, nor do larger sensors automatically increase image quality. On the whole, though, higher resolution systems usually do translate into higher quality images.

Sensor sizes are somewhat important to this discussion, yet not entirely critical. The camera industry has been packing more and more pixels into the same physical space for nearly two decades, without the feared increase in noise. Additionally, better noise-sampling/reduction algorithms, particularly from OEMs like Sony and Ambarella, have allowed far greater noise reduction than in the past. Cameras such as the Sony A7R IV offer nearly noise-free images at ISO 32,000!

Sensor sizes vary of course, but we’ll find most UAS utilize the 1/2.3” or the 1” sensor.

“Imagine a UAS equipped with an 8K camera inspecting a communications tower. Resolution is high, so small specks of rust, pitting, spalling, or other damage which might be missed at lower resolutions or by the human eye become apparent with greater resolution.”

WHY DOES HIGHER RESOLUTION TRANSLATE TO BETTER FINISHED PRODUCT?

Generally, we’re downsampling video or photos into smaller delivery formats, and that works in our favor. In broadcast, 4:2:2 uncompressed color was the grail, yet most UAS cameras capture a 4:2:0 color/chroma sample. However, a 4K capture, downsampled to 1080 at delivery, offers videographers that same “grail” color schema of 4:2:2!

As we move into 6 or 8K, similar results occur. 8K downconverted to HD offers a 4:4:4 color sample.
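A rough sketch of the plane arithmetic behind those claims (assuming a 4:2:0 source, whose chroma planes are half the luma resolution in each dimension; the framing is ours, not a formal colorimetric derivation):

```python
formats = {"HD": (1920, 1080), "4K UHD": (3840, 2160), "8K UHD": (7680, 4320)}

for name, (w, h) in formats.items():
    chroma_w, chroma_h = w // 2, h // 2   # 4:2:0 = half-res chroma planes
    print(f"{name}: luma {w}x{h}, chroma {chroma_w}x{chroma_h}")

# 4K's 1920x1080 chroma plane already matches an HD delivery frame, and
# 8K's 3840x2160 plane exceeds it, which is why downconversion yields a
# denser effective chroma sample than the source's nominal 4:2:0.
```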

CROPPING

We gain the ability to crop in post editing/delivery, recomposing images without fear of losing resolution. Although the aircraft may shoot a wide shot, the image may be recomposed to a tighter framing in post, so long as the delivery resolution is smaller than the source/acquisition capture. For example, shooting 4K for 1080 delivery means that up to 75% of the image may be cropped without resolution loss.
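That 75% figure falls straight out of the pixel counts, as this quick check shows:

```python
src_w, src_h = 3840, 2160   # 4K UHD acquisition
dst_w, dst_h = 1920, 1080   # HD delivery

# Fraction of source pixels that can be discarded while still filling
# the delivery frame at 1:1 pixels (no upscaling).
max_crop = 1 - (dst_w * dst_h) / (src_w * src_h)
print(f"Up to {max_crop:.0%} of the 4K frame may be cropped away")  # 75%
```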

It’s quite possible to edit 8K HEVC streams on a newer laptop, although performance is not optimal without a great deal of RAM and a good video card, as HEVC requires a fair amount of horsepower to decode. The greater point is that we can edit images with deep recomposition. Moreover, we have more pixels to work with, providing greater latitude for color correction, color timing, and depth/saturation.

For public safety, this is priceless. An 8K capture provides great ability to zoom/crop deeply into a scene and deliver much greater detail in HD or 4K delivery.

The same can be said for inspections, construction progress reports, etc. Users can capture at a high resolution and deliver in a lower resolution.

Another benefit of 6 and 8K resolutions is the increase in dynamic range. While small sensors only provide a small increase in dynamic range, a small increase is preferable to no increase.

To address other statements about 6K and 8K resolutions: the human eye can resolve roughly 40 megapixels, age-dependent, and 8K is approximately 33 megapixels. However, the human eye doesn’t see equal resolution across its entire field; the center of our vision resolves approximately 8 megapixels, while the periphery resolves far less. Higher resolution does provide greater smoothing across the field of view, so our eyes perceive smoother moving pictures.

BEYOND THE HUMAN EYE

Going well beyond the human eye, higher resolutions are applicable to “computer vision,” benefiting mapping, 3D modeling, and other similar applications. Generally speaking, more pixels equal greater smoothness and truer geometry. As technology moves deeper into artificial intelligence, higher resolutions with more efficient codecs become even more important. Imagine a UAS equipped with an 8K camera inspecting a communications tower: resolution is high, so small specks of rust or other damage which might be missed at lower resolutions or by the human eye become visible. Now imagine that greater resolution providing input to an AI-aided inspection report that notifies the operator or manager of any problem. Our technology is moving beyond the resolution of the human eye for good reason.

DATA STORAGE

Files from a 6 or 8K camera are relatively small, particularly when compared to uncompressed 8K content (9.62TB per hour). Compression formats, known as “codecs,” have been improving steadily for years. When compressed delivery first debuted in physical form, we saw Hollywood movies delivered on DVD; then we saw HD delivered on Blu-ray. Delivery over disc formats is dead, and we’ve since moved through MPEG-2, AVC/h.264, AVCHD, and now H.265/HEVC. In the near future we’ll see still more compression schemes benefitting our workflows, whether delivered via streaming or thumbdrive. VVC, or “Versatile Video Codec,” will be the next big thing in codecs for 8K, scheduled to launch in early 2022.
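For context on where figures like that come from, here is the rough uncompressed math (assuming 8-bit 4:4:4 at 30 fps; the exact 9.62TB figure will vary with bit depth, chroma sampling, and frame rate):

```python
w, h, bytes_per_px, fps = 7680, 4320, 3, 30   # 8K, 8-bit RGB, 30 fps
tb_per_hour = w * h * bytes_per_px * fps * 3600 / 1e12
print(f"~{tb_per_hour:.2f} TB/hour uncompressed")  # ~10.75 TB/hour
```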

Conventional h.264 and H.265/HEVC are currently being used as delivery codecs for compressed 6 and 8K streams. 8K has been successfully broadcast (in testing environments) at rates as low as 35Mbps for VOD, while NHK has set the standard at 100Mbps for conventional delivery. Using these codecs, downconverting streams for OTA/Over The Air viewing on tablets, smartphones, or ground station controllers is already possible, although it’s unlikely we’ll see full 8K streaming from the UAS to the GSC.

U3-rated data cards are certainly prepared for 6 and 8K resolutions/datastreams; compression is what makes this possible. The Kandao 8K and Insta360 8K cameras both record to U3 cards available in the market today.

It will be some time before the average consumer sees 8K screens in the home. However, for 8K delivered for advertising, matching large-format footage shot on Weapon, Monstro, Helium, or other cinema cameras may be less time-consuming when the UAS is also capturing 8K, even from the smaller camera formats carried on a UAS (and such cameras may easily be carried on heavy-lift UAS).

Professional UAS pilots will benefit greatly from 5, 6, or 8K cameras, and should not be shy about testing the format. Yes, it’s yet another paradigm shift in an always-fluid era of aerial and visual technology, yet there can be no doubt that these higher resolutions provide higher quality in the final product. Be prepared; 2020 is the year of 5, 6, and 8K cameras on the flying tripods we’re using for our professional and personal endeavors, and I, for one, am looking forward to it with great enthusiasm.


Experts Tested 4 Different Drone Mapping Solutions for Crime Scene Investigation. Here’s What Happened.

At Commercial UAV Expo in Las Vegas, more than 300 drone industry professionals watched as experts tested four different drone mapping solutions for crime scene investigation at night.

Guest post by Douglas Spotted Eagle, Chief Strategy Officer at KukerRanken

Commercial UAV Expo brought UAS professionals, developers, manufacturers, first responders, and related industries under one roof for the first time in nearly two years. Due to the pandemic, the show was less attended than in previous years, yet it provided robust live demonstrations, night flight, daytime seminars, panels, and case studies for a relatively large audience. There was a strong buzz amongst the crowd about being at an in-person event and experiencing face-to-face communication for the first time in many months.

In addition to the “Beyond the Cage” Live Drone Demo Day that launched Commercial UAV 2021, produced by Sundance Media Group, Wednesday night gave attendees a glimpse of how crime scene investigation tools function in the dark hours. Sundance Media Group developed this methodology several years ago at the request of a law enforcement agency and has been presenting it at academies, colleges, universities, and tradeshows since 2017, with a variety of aircraft including the DJI Mavic, Phantom 4, Yuneec H520, Skydio, and Autel EVO series (versions 1 and 2). All have successfully output data, excepting Skydio, which struggles with brightly lit scenes surrounded by darkness.

Presented by FoxFury, Sundance Media Group, Autel, and Pix4D, this event also invited SkyeBrowse to participate in the demonstration, showing the effectiveness and speed of their application.

Testing Drone Mapping Solutions for Crime Scene Investigation: Setting the Scene

With a model covered in moulage, a mock slit throat, and a blood trail on the ground, the demonstration began with the multi-vendor team led by Brady Reisch and Bryan Worthen of Kuker-Ranken, Todd Henderson and Patrick Harris of SMG, and David Martel. The team placed four FoxFury T56 lighting systems at specific, measured points in the scene, supplemented by FoxFury NOW lanterns and Rugo lighting to fill in holes and eliminate shadows.

Douglas Spotted Eagle of SMG and KukerRanken emcee’d the event through the two flights.

Douglas Spotted Eagle addresses the crowd of 300 persons

SkyeBrowse had the first flight, with its one-button capture. Brady Reisch set up the mission, with input from the SkyeBrowse developer on camera exposure levels for the SkyeBrowse video mission. Once the mission was completed, the captures were uploaded to the SkyeBrowse website, where results were ready approximately 30 minutes following the flight.

Brady Reisch of Kuker-Ranken sets up the SkyeBrowse mission with Bobby Ouyang of SkyeBrowse

The Autel EVO II Pro was programmed on-site for an automated SkyeBrowse mission and the demonstration began. The area is highly congested, with palm trees and buildings enclosing the small rotunda in front of the Mirage Hotel Convention Center.

Brady Reisch flew the second EVO II mission manually, replicating a double-grid mission supplemented by a high-altitude orbit, along with manually captured orbits and select placements. Because of the crowd, time was a consideration; in an actual homicide scene, more low-placed images would have been captured.

Brady Reisch monitors time as Pix4DReact rapid-renders the scene (60 seconds)

The mission photos were uploaded to Pix4Dreact on-scene and rendered while the audience observed, requiring approximately 60 seconds to output an ortho-rectified 2D image, complete with evidence markers/tags and a PDF supplemental report. The photos were also loaded into Pix4Dmapper and Leica Infinity, to be rendered for 3D viewing once the show floor opened on Thursday. Pix4Dreact is a two-dimensional, rapid-mapping solution, so there is no 3D view.

The four screen captures tell the rest of the story, and readers can determine for themselves what each software is capable of providing. One point of interest is that there were many claims of “guaranteed 1cm of precision regardless of flight area,” which have yet to be verified. The Kuker-Ranken team will be re-flying the mission with two separate GPS systems (Leica and Emlid) to verify the claims of precision.

Precision is Repeatable

Precision is repeatable. Accuracy is the degree of closeness to a true value; precision is the degree to which an instrument or process repeats the same value. In other words, accuracy is the degree of veracity while precision is the degree of reproducibility. With a base station, NTRIP, Spydernet, PPK, or RTK workflow, precision is always the goal, well beyond accuracy. This is a relatively new discussion in the use of unmanned aircraft, and although the topic seems simple enough, its complexity holds challenges not easily overcome without education and practice. We are fortunate to have a partner in Kuker-Ranken, providing precision tools to the survey, forensic, civil engineering, and AEC industries since 1928. The KR team includes PLSs, EITs, and other accredited precision professionals, rarely found in the UAS industry.
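A minimal numeric sketch of the distinction, using hypothetical repeated GNSS fixes against a known control point (all values are invented for illustration):

```python
import statistics

true_value = 100.000   # surveyed control elevation, meters
fixes = [100.021, 100.019, 100.022, 100.020, 100.018]  # repeated measurements

accuracy_error = abs(statistics.mean(fixes) - true_value)  # closeness to truth
precision = statistics.stdev(fixes)                        # repeatability

print(f"accuracy error (mean offset): {accuracy_error:.3f} m")  # ~0.020 m
print(f"precision (std deviation):    {precision:.3f} m")       # ~0.002 m
# A tight spread around a constant offset: precise, but not accurate.
```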

Precision is critical for surveyors, civil engineers, forensic analysts and investigators, construction sites, mapping, agriculture, and other verticals in the UAS industry, and this sort of scene is no exception. Properly placing a map or model onto a coordinate system is necessary for many professional pilots in the UAV field. While this mission was not precise to a global coordinate, it is precise within itself; in other words, measurements within the image will be accurate, even though the scene’s absolute location is imprecise.

We’ll dive more deeply into precision in a future article. For purposes of this exercise, we’re more interested in accuracy of content within the scene, and all four outputs were similar in that regard; distances, volumes, and angles may be measured point to point. Pix4Dreact is not as accurate as the other three tools, as it’s not intended to be a deeply accurate application given its speed of output.

Output Results of Drone Mapping Solutions

Output #1: SkyeBrowse (processing time, approximately 35 minutes)

Output #2: Pix4Dreact (processing time, approximately 1 minute)


Output #3: Pix4Dmapper (processing time, approximately 2.5 hours)


Output #4: Leica Infinity (processing time, approximately 2 hours, 50 minutes)


Agencies who would like access to this data are invited to contact Brady Reisch, VDC Specialist at Kuker-Ranken.