A Deep Insider’s Look at a Rugged Terrain Mission to Investigate a Helicopter Crash with Drones

Crash site investigation with drones has emerged as a leading application for unmanned systems in public safety.  Gathering data that can be used by investigators in a courtroom, however, requires careful mission planning.  Here, sUAS expert and industry figure Douglas Spotted Eagle of  KukerRanken provides a detailed insider’s view of a helicopter crash site investigation.

Unmanned aircraft have become proven assets during investigations, offering far more than the ability to reconstruct a scene. When a high ground sampling distance (GSD) is used, the data may be deeply examined, allowing investigators to find evidence that may have been missed for various reasons during a site walk-through.

Recently, David Martel, Brady Reisch, and I were called upon to assist in multiple investigations where debris was scattered over a large area and investigators could not safely traverse the areas where high-speed impacts may have spread evidence over large, rocky, uneven terrain. In this particular case, a EuroStar 350 aircraft may have experienced a cable wrap around the tail rotor and boom, potentially pulling the tail boom toward the nose of the aircraft and causing a high-speed rotation of the hull prior to impact. Debris was spread over a relatively contained area, with some evidence still missing.

Per the FAA investigators:

“The helicopter was on its right side in mountainous densely forested desert terrain at an elevation of 6,741 ft mean sea level (MSL). The steel long line cable impacted the main rotor blades and was also entangled in the separated tail rotor. The tail rotor with one blade attached was 21 ft. from the main wreckage. Approximately 30 ft. of long line and one tail rotor blade were not located. The vertical stabilizer was 365 ft. from the main wreckage.”

With a missing tail rotor blade and the missing long line, unmanned aircraft were called in to provide a high resolution map of the rugged area/terrain, in hopes of locating the missing parts that may or may not aid in the crash investigation.

The terrain was difficult and unimproved, requiring four-wheel drive vehicles for access into the crash site. Due to rising terrain, we elected to launch/land the aircraft from the highest point relevant to the crash search area, which encompassed a total of approximately 70 acres.

Adding to the difficulty of finding missing parts, the helicopter was partially covered in grey, red, and black vinyl wrap, having recently been wrapped for a trade show where it was displayed.

We arrived on scene armed with pre-loaded Google Earth overheads and an idea of optimal locations to place seven Hoodman GCP discs, which would allow us to capture RTK points for accuracy, and manual tie points once the images were loaded into Pix4D.  We pre-planned the flight for an extremely high ground sampling distance (GSD) averaging 0.4 cm per pixel. Due to the mountainous terrain, this GSD would vary from the top to the bottom of the site. We planned to capture the impact location at various GSDs for best image evaluation, averaging as tight as 0.2 cm per pixel. Some of these images would be discarded for the final output and used only for purposes of investigation.

Although the overall GSD was greater than necessary, the goal was to be able to zoom in very deep on heavily covered areas with the ability to distinguish rocks from potential evidence, enabling investigators to view the overall scene via a 3.5 GB GeoTIFF in Google Earth and refer back to the Pix4Dmapper project once rendered/assembled.
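The relationship between flying height and GSD that drives this kind of plan is the standard photogrammetry formula. The sensor numbers below (13.2 mm sensor width, 10.6 mm focal length, 5472 px across, roughly a 1-inch 20 MP camera) are illustrative assumptions, not the mission's recorded specifications:

```python
def gsd_cm_per_px(sensor_width_mm, focal_mm, image_width_px, altitude_m):
    """Ground sampling distance (cm/px) for a nadir shot."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_mm * image_width_px)

def altitude_for_gsd(target_gsd_cm, sensor_width_mm, focal_mm, image_width_px):
    """Invert the formula: flying height needed to hit a target GSD."""
    return (target_gsd_cm * focal_mm * image_width_px) / (sensor_width_mm * 100.0)

# Assumed 1-inch, 20 MP sensor: 13.2 mm wide, 10.6 mm focal length, 5472 px wide.
alt = altitude_for_gsd(0.4, 13.2, 10.6, 5472)   # roughly 17-18 m AGL for 0.4 cm/px
```

Because GSD scales linearly with height above ground, rising terrain under a fixed-altitude flight makes the GSD drift, which is why the mission's 0.4 cm/px figure is an average rather than a constant.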

The same scene minus initial marker points.

Although working directly in Pix4D provides the best in-depth view of each individual photo, the Google Earth overlay/GeoTIFF enables a reasonably deep examination.

Using two of the recently released Autel EVO II Pro aircraft, we planned the missions so that one aircraft would manage North/South corridors while the other captured East/West corridors.  Planning the mission in this manner allows for half the work time, while capturing the entire scene. This is the same method we used to capture the MGM festival grounds following the One October shooting in Las Vegas, Nevada. The primary difference is in the overall size, with the Pioche mission being nearly 70 acres, while the Las Vegas festival ground shooting area is under 20 acres in total.

Similar to the Las Vegas shooting scene, shadow distortion/scene corruption was a concern; flying two aircraft beginning at 11:00 a.m. and finishing by 1:30 p.m. helped avoid issues with shadow.

Temporal and spatial offsets were employed to ensure that the EVO II Pro aircraft could not possibly collide: we set off from opposite sides of the area at different points in time, with a few feet of vertical offset added in for an additional cushion of air between the aircraft. We programmed the missions to fly at a lower speed of 11 mph (16 ft/s) to ensure that the high-GSD/low-altitude images would be crisp and clean. It is possible to fly faster and complete the mission sooner, yet with the three-hour travel time from Las Vegas to the crash site, we wanted to ensure everything was captured at its best possible resolution with no blur, streaks, or otherwise challenged imagery. Overall, each aircraft emptied five batteries, with our batteries set to trigger an exchange notification at 30%.
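The choice to slow the aircraft is essentially a motion-blur budget: ground smear per exposure must stay well under the target GSD. A minimal sketch of that check, where the 1/2000 s shutter is an assumed value for a bright midday exposure, not a recorded mission setting:

```python
MPH_TO_MS = 0.44704  # miles per hour -> metres per second

def motion_blur_cm(speed_m_s, shutter_s):
    """Ground distance the camera travels during one exposure, in cm."""
    return speed_m_s * shutter_s * 100.0

speed = 11 * MPH_TO_MS                   # ~4.9 m/s, the mission's programmed speed
blur = motion_blur_cm(speed, 1 / 2000)   # ~0.25 cm of smear per exposure
# Rule of thumb (assumption): keep smear below roughly half the target GSD,
# so a 0.4 cm/px plan wants the blur near 0.2 cm or less.
```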

Total mission running time was slightly over 2.5 hours per aircraft, with additional manual flight over the scene of impact requiring another 45 minutes of flight time to capture deep detail. We also captured imagery facing the telecommunications tower at the top of the mountain for line of sight reference, and images facing the last known landing area, again for visual reference to potential lines of sight.

By launching/landing from the highest point in the area to be mapped, we were able to avoid any signal loss across the heavily wooded area. To ensure VLOS was maintained at all times, FoxFury D3060’s were mounted and in strobing mode for both sets of missions (The FoxFury lighting kit is included with the Autel EVO II Pro and EVO II Dual Rugged Bundle kits).

Once an initial flight to check exposure/camera settings was performed, along with standard controllability checks and other pre-flight tasks, we sent the aircraft on their way.

Capturing over 6,000 images, we checked image quality periodically to ensure consistency. Once the missions were complete, we drove to the site of impact to capture obliques of the specific area in order to create a denser model/map of the actual impact site. We also manually flew a ravine running parallel to the point of impact to determine whether any additional debris was present (we did find several small pieces of fuselage, tools assumed to have been cast off at impact, and other debris).

The initial pointcloud took approximately 12 hours to render, generating a high-quality, highly dense initial cloud.

After laying in point controls, marking scale constraints as a check, and re-optimizing the project in Pix4D, the second step was rendered to create the dense point cloud. We were stunned at the quality of the dense point cloud, given the large area.

The dense point cloud is ideal for purposes of measuring. Although this sort of site would typically benefit (visually) from texturing/placing the mesh, it was not necessary due to the high number of points and deep detail the combination of Pix4D and Autel EVO II Pro provided. This allowed us to select specific points where we believed points of evidence may be located, bringing up the high resolution images relevant to that area. Investigators were able to deep-dive into the area and locate small parts, none of which were relevant to better understanding the cause of the crash.

“The project generated 38,426,205 2D points and 13,712,897 3D points from a combination of nearly 7,000 images.”

Using this method of reviewing the site allows investigators to see more deeply, with ability to repeatedly examine areas, identify patterns from an overhead view, and safely search for additional evidence that may not be accessible by vehicle or foot. Literally every inch of the site may be gone over.

Further, using a variety of computer-aided search tools, investigators may plug in an application to search for specific color parameters. For example, much of the fuselage is red in color, allowing investigators to search for a specific range of red colors. Pieces of fuselage as small as 1” were discovered using this method. Bright white allowed for finding some items, while 0-16 level black allowed for finding other small objects such as stickers, a toolbox, and oil cans.
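A color-range search of this kind reduces to a per-channel threshold test over the orthomosaic's pixels. The sketch below uses plain Python over a toy pixel map rather than any particular investigation tool, and the threshold values are illustrative:

```python
def find_color_matches(pixels, lo, hi):
    """Return coordinates whose (r, g, b) falls inside [lo, hi] per channel."""
    return [
        xy for xy, rgb in pixels.items()
        if all(l <= v <= h for v, l, h in zip(rgb, lo, hi))
    ]

# Toy frame: one strongly red pixel (candidate fuselage fragment) among rock greys.
frame = {(0, 0): (120, 118, 115), (0, 1): (200, 30, 25), (1, 0): (90, 88, 85)}
red_hits = find_color_matches(frame, lo=(150, 0, 0), hi=(255, 80, 80))

# A near-black pass (lo=(0, 0, 0), hi=(16, 16, 16)) would flag the 0-16 level
# dark objects the text mentions, such as stickers and oil cans.
```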

Using a tool such as the DTResearch 301 to capture RTK geolocation information, we also use the DTResearch ruggedized tablet for a localized point cloud scan which may be tied into the Pix4Dmapper application. Capturing local scan data from a terrestrial perspective with GCPs in the image allows for extremely deep detail in small environments. This is particularly valuable for construction sites or interior scans, along with uses for OIS scenes, etc.

Primary Considerations When Capturing a Scene Twin

  • GSD. This is critical. There is a balance between altitude and propwash, with all necessary safety considerations.
  • Vertical surfaces. In the event of an OIS where walls have been impacted, the ability to fly vertical surfaces and capture them with a consistent GSD will go a long way toward creating a proper model.
  • Shadow distortion. If the scene is very large, time will naturally fly by and so will the sun. In some conditions, it’s difficult to know the difference between burn marks and shadows. A bit of experience and experimentation will help manage this challenge.
  • Exposure. Checking exposure prior to the mission is very important, particularly if an application like Pix4Dreact isn’t available for rapid mapping to check the data on-site.
  • Angle of sun/time of day. Of course, accidents, incidents, crimes, and other scenes happen when they happen. However, if the scene allows for capture in the midday hours, grab the opportunity and be grateful. This is specifically the reason that our team developed night-time CSI/data capture, now copied by several training organizations across the country over recent years.
  • Overcapture. Too much overlap is significantly preferable to undercapture. Ortho and modeling software love images.
  • Obliques. Capture obliques whenever possible. Regardless of intended use, capture the angular views of a scene. When possible, combine with ground-level terrestrial imaging. Sometimes this may be best accomplished by walking the scene perimeter with the UA, capturing as the aircraft is walked. We recommend removing props in these situations to ensure everyone’s safety.
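The overcapture point can be made concrete with a back-of-the-envelope photo count for a nadir grid. The sensor resolution and overlap percentages below are assumptions for illustration, not the mission's actual flight plan:

```python
import math

ACRE_M2 = 4046.86  # square metres per acre

def image_count_estimate(area_m2, gsd_cm, img_w_px, img_h_px,
                         front_overlap, side_overlap):
    """Rough photo count for a single nadir grid at a given GSD and overlap."""
    foot_w = gsd_cm / 100.0 * img_w_px          # ground footprint width, metres
    foot_h = gsd_cm / 100.0 * img_h_px          # ground footprint height, metres
    spacing_x = foot_w * (1.0 - side_overlap)   # distance between flight lines
    spacing_y = foot_h * (1.0 - front_overlap)  # distance between shutter triggers
    return math.ceil(area_m2 / (spacing_x * spacing_y))

# ~70 acres at 0.4 cm/px, with assumed 5472x3648 frames and 80/70 overlaps:
n = image_count_estimate(70 * ACRE_M2, 0.4, 5472, 3648, 0.80, 0.70)
```

Raising either overlap by a few points multiplies the count quickly, which is why "software loves images" translates directly into flight time and batteries.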

What happens when these points are put aside?

This is a capture of a scene brought to us for “repair,” as the pilot didn’t know what he didn’t know. Although we were able to pull a bit of a scene, the overexposure, too-high altitude/low GSD, and lack of obliques made this scene significantly less valuable than it might have been.

Not understanding the proper role or application of the UA in the capture process, the UA pilot created a scene that is difficult to accurately measure and lacking appropriate detail, and the overexposure creates difficulties laying in the mesh. While this scene is somewhat preserved as a twin, much detail is missing even though the equipment had the necessary specifications and components to capture a terrific twin. Pilot error cannot be fixed. Operating on the “FORD” principle, understanding that FOcus, exposuRe, and Distance (GSD) cannot be rectified or compensated for in post-processing, means the scene has to be captured properly the first time. The above scene can’t be properly brought to life due to gross pilot error.


Ultimately, the primary responsibility is not merely to produce a digital twin of the scene, but to offer deep value to the investigator(s) that may enhance or accelerate their investigations. Regardless of whether it’s a crash scene, insurance capture, energy audit, or other mapping activity, understanding how to set up the mission, fly, process, and export the project is paramount.

Capturing these sorts of scenes is not for the average run-and-gun Part 107 certificate holder. Although newer pilots may feel they are all things to all endeavors benefitting from UA, planning, strategy, and experience all play a role in ensuring qualified, quality captures. Pilots wanting to get into mapping should practice with photogrammetry tools and fly the most challenging environments they can find in order to be best prepared for the environmental, temporal, and spatial challenges that may accompany an accident scene. Experience comes through discovery: when it’s cold and batteries expire faster, when satellites are challenged in an RTK or PPK environment, when tablets and devices overheat, when long flight times must be managed across multi-battery missions, or when winds force a crabbing mission versus a head/tailwind mission. Learning to maintain GSD in wild terrain, or conducting operations amidst outside forces that influence the success or failure of a mission, only comes through practice over time. Having a solid, tried-and-true risk mitigation/SMS program is crucial to success.

We were pleased to close out this highly successful mission, delivering a 3.5 GB GeoTIFF for overlay on Google Earth while also exporting the project for investigators to view at actual ground height, saving time, providing a safety net in rugged terrain, and creating a digital record/twin of the crash scene that may be used until the accident investigation is closed.


Equipment used on this mission:

●  2X Autel EVO II™ Pro aircraft

●  Autel Mission Planner software

●  FoxFury D3060 lighting

●  DTResearch 301 RTK tablet

●  Seko field mast/legs

●  Seko RTK antenna

●  Hoodman GCP

●  Hoodman Hoods

●  Manfrotto Tripod

●  Dot3D Windows 10 software

●  Pix4DMapper software

●  Luminar 4 software

LUTS, Log, 10Bit: Geeking Out on Camera Formats for Drones

It takes more than just a Part 107 to be a good drone service provider: customers require expertise in production, too.  If you’re confused about camera formats for drones and some of the newest options on the market, we’ve got you covered with this deep dive.  UAS expert and industry figure Douglas Spotted Eagle here at KukerRanken provides a detailed and expert explanation of 10-bit file formats, LOG formats, and look-up tables (LUTs).

Camera Formats for Drones: LUTS, LOG, 10Bit

The latest trend in UA camera systems is the ability to record in 10-bit file formats, as well as in LOG formats, which allow the use of look-up tables (LUTs, pronounced “loots” or “luhts”), providing significantly greater dynamic range. This greater dynamic range enables color matching and greater opportunity to color correct alongside higher quality formats such as those offered by Sony, Blackmagic Design, Canon, and others. Video does not have to be recorded in LOG to use a LUT, but using an input LUT on semi-saturated frames could generate undesirable results.


A look-up table (LUT) is an array of data fields that makes computational processes lighter, faster, and more efficient. LUTs are used in a number of industries where intense data can be crunched more efficiently based on known factors.  For purposes of video, a LUT complements recording in a LOG format, enabling deep color correction, compositing, and matching. There are many LUTs available; some are camera-specific, while others are “general” LUTs that may be applied per user preference. A LUT might more easily be called a “preset color adjustment” which we can apply to our video at the click of a button. Traditionally, LUTs have been used to map one color space to another. These color “presets” are applied after the video has been recorded, in a non-linear editing system such as Adobe Premiere, Blackmagic Resolve, Magix Vegas, etc.

Two types of LUTs are part of production workflows:

  • Input LUTs provide stylings for how the recorded video will look in a native format (once applied to specific clips). Input LUTs are generally applied only to clips/events captured by a specific camera. Input LUTs are applied prior to the correction/grading process.
  • Output/”Look LUTs” are used on an overall production to bring a consistent feel across all clips. “Look” LUTs should not be applied to clips or project until after all color correction/grading has been completed.

There are literally thousands of pre-built LUTs for virtually any scene or situation. Many are free, some (particularly those designed by well-known directors or color graders) are available for purchase.

In addition to Input/Look LUTs, there are also 1D and 3D LUTs. 1D LUTs simply apply a value, with no pixel interaction; they are a sort of fixed template of locked values.  3D LUTs map the XYZ data independently, so greater precision without affecting other tints is possible. For a deeper read on 3D vs 1D LUTs, here is a great article written by Jason Ritson.
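A minimal sketch of how a 1D LUT behaves, using a hypothetical five-entry table; a real LUT file typically carries 1024 or more samples per channel, but the lookup-and-interpolate mechanic is the same:

```python
def apply_1d_lut(value, lut):
    """Map a value in [0, 1] through a 1D LUT with linear interpolation.

    The same curve is applied to each channel independently, with no
    pixel or cross-channel interaction -- the defining trait of a 1D LUT.
    """
    pos = value * (len(lut) - 1)
    i = int(pos)
    if i >= len(lut) - 1:
        return lut[-1]
    frac = pos - i
    return lut[i] * (1.0 - frac) + lut[i + 1] * frac

# Hypothetical contrast-style curve sampled at five evenly spaced inputs:
lut = [0.0, 0.15, 0.5, 0.85, 1.0]
deep_shadow = apply_1d_lut(0.125, lut)   # falls between samples, so interpolated
```

A 3D LUT instead indexes a lattice by all three channels at once, which is what lets it retarget one tint without dragging the others along.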


How LOG differs from standard recording is a fairly complex question that could occupy deep brain-cycle time. In an attempt to simplify the discussion, let’s begin with the native image.

Natively, the camera applies color and exposure based on the sensor, light, and a variety of calculations. This is the standard output of the majority of off-the-shelf UA systems.  This type of output is ideal for video that is not going to be heavily corrected, will be seen on the typical computer monitor or television display, and when time is of the essence. These are referred to as “linear picture profiles,” the most common means of recording video. Cell phones, webcams, and most video cameras record in linear format. While this is most common, it also reduces flexibility in post due to dynamic range, bit depth, and other factors. In linear profiles, typical 8-bit video has 256 values per color channel, represented equally across the color spectrum. The balance (equality) of pixel ranges does not allow the most effective use of dynamic range, limiting correction options in post. With linear picture profiles, bottom-end/dark colors use as much energy as brighter exposures, so while color balance is “equal,” it is inefficient and does not allow the greatest range to be displayed for the human eye, specifically midtones and highlights.

To take the best advantage of a LUT, video should be recorded in a LOG format. It’s rare an editor would apply a LUT to linear video, as in most cases, color space, exposure, and dynamic range would be unrealistic and challenged.

LOG formats store data differently than linear picture profiles.

In the past, LOG formats were only accessible in very high-end camera systems such as the Sony CineAlta Series, and similar, high-dollar products. Today, many UA cameras offer access to LOG formats.

When the camera recording parameters are set to LOG (Logarithmic) mode, these native color assignments are stripped away, and the camera records data in a non-linear algorithm.

LOG applies a curve in the bottom end of the image (darks/blacks) and shifts the available space to the more visible areas of the color range where it is more efficient/effective.

Of course, this will create the illusion of a low-contrast, washed-out image during the recording process.
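The washed-out look follows directly from the math. The curve below is a generic normalized log encoding for illustration only, not Autel's A-LOG or any vendor's actual transfer function:

```python
import math

def log_encode(linear, stops=10.0):
    """Toy log transfer curve: linear light in [0, 1] -> encoded [0, 1].

    Shadow values are pushed well up the output scale, so more of the
    recorded code range is spent where the grade needs it most.
    """
    k = 2.0 ** stops
    return math.log(1.0 + linear * (k - 1.0)) / math.log(k)

# A deep shadow at 5% linear light lands over halfway up the encoded range,
# which is exactly why un-graded LOG footage looks grey and low-contrast.
shadow = log_encode(0.05)
```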

A-LOG image direct from camera | LUT applied, no additional grading

“A-LOG” is Autel Robotics logarithmic recording format. A-LOG not only allows for greater dynamic range, it is also a 10-bit format, enabling much greater chroma push when color correcting or grading. Most manufacturers offer their own LOG formats.

Graded image | Split screen comparing LUT, grade, and source image

Virtually every editing application today features the ability to import LUTs to be used with 8-, 10-, or 12-bit LOG files. Autel and DJI are the two predominant UA manufacturers offering LOG capability to their users, and any medium-lift UA that can mount a Sony, Canon, or Blackmagic Design camera would also benefit from shooting in a LOG format.

Magix Vegas allows import of LUTs, and offers the ability to save a color correction profile as a LUT to be re-used, or shared with others.


Shooting video in LOG is much like choosing between .DNG and .JPG. If images/video are to be quickly turned around, and time, efficiency, or client desires are more important than taking the time to grade or correct images, there is no benefit to shooting video in LOG formats.  LOG, like DNG (or RAW), will simply slow the process. However, if images or video are to be matched to other camera systems, or when quality is more important than the time required to achieve the best possible image, then LOG should be used to future-proof and preserve the best original integrity of archived footage. Of particular consideration are higher resolution formats, such as 2.7K, 4K, 6K, or 8K video. These formats will most likely be downsampled to HD for most delivery in the near future, and shooting in LOG allows for the greatest re-use, or color preservation, when downsampling. Without getting too deeply into the math of it, the 4:2:0 color sampling captured in a 4K or higher format becomes effectively 4:2:2 when delivered in HD, enabling significant opportunity in post-processing.

“Which is more committed to breakfast, the chicken or the pig? Baking colors into recorded video is limiting, and if appropriate, should be avoided.”


Many monitors today allow for input LUT preview prior to, and during the recording process. This will require some sort of HDMI (or SDI) output from the UA camera into the monitor.

Input LUTs can be applied temporarily for monitoring only, or they can be burned into files for use in editing when capturing Blackmagic RAW (when using BMD products). Bear in mind, once a LUT has been “baked in” to the recorded video, it cannot be removed and is very difficult to compensate for.

Most external monitors allow shooters/pilots to store LUTs on the monitor, which offers confidence when matching a UA camera to a terrestrial or other camera. In our case, matching a Sony A7III to the UA is relatively easy as we’re able to create camera settings on either camera while viewing through the LUT-enabled monitor, seeing the same LUT applied (virtually) to both cameras.

Another method of seeing how a LUT will affect video can be found in LATTICE, a high-end professional tool for directors and editors.


Most UA cameras today are capable of recording in h.264 (AVC) or h.265 (HEVC). Any resolution beyond HD should always be captured in h.265. H.264/AVC was never intended to be a recording format for resolutions higher than HD, although some folks are happy recording 2.7K or 4K in the less efficient format.  It’s correct to say that HEVC/h.265 is more CPU-intensive, and there are still a few editing and playback applications that cannot efficiently decode HEVC. However, the difference in file size/payload is significant, and quality of image is far superior in the HEVC format. A video created using HEVC at a lower bit rate has roughly the same image quality as an h.264 video at a higher bit rate.

More importantly, AVC/h.264 as implemented in these cameras does not support 10-bit video, whereas HEVC does, so not only are we capturing a higher quality image in a smaller payload, we’re also able to access 10-bit, which can make a significant difference when correcting lower quality imagery, particularly in those pesky low-light scenarios often found close to sunset, or night-flight images that may contain noise. Additionally, 10-bit in LOG format allows photographers to use a lower ISO in many situations, reducing overall noise in the low-light image. Last but not least, AVC does not support HDR, whereas HEVC does. Each camera is a bit different, and each shooting scenario is different, so a bit of testing, evaluation, and experience enables pilots to make informed decisions based on client or creative direction or needs.
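The practical gap between 8-bit and 10-bit is easy to quantify:

```python
def tonal_levels(bit_depth, channels=3):
    """Levels per channel and total representable colors at a given bit depth."""
    per_channel = 2 ** bit_depth
    return per_channel, per_channel ** channels

levels_8, colors_8 = tonal_levels(8)     # 256 levels/channel, ~16.7 million colors
levels_10, colors_10 = tonal_levels(10)  # 1024 levels/channel, ~1.07 billion colors
ratio = colors_10 // colors_8            # 64x more colors to push in the grade
```

Those extra levels are what keep heavy corrections, such as sunset gradients or lifted night footage, from banding.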


Slower, older computers will struggle with the highly compressed HEVC format, particularly when 10-bit formats are used. Recent computer builds can decode HEVC on the graphics card, and newer CPUs have built-in decoding. However, not everyone has faster, newer computer systems available. This doesn’t mean that older computers cannot be used for HEVC source video.

Most editing systems allow for “proxy editing” where a proxy of the original file is created in a lower resolution and lower compression format. Proxy editing is a great way to cull footage to desired clips/edits. However, proxies cannot be color corrected with accurate results. The proxies will need to be replaced with source once edits are selected. In our house, we add transitions and other elements prior to compositing or color correcting. LUTs are added in the color correction stage.

10-bit A-LOG, no LUT applied | 10-bit A-LOG, LUT-only applied
10-bit A-LOG, LUT applied, graded | 10-bit A-LOG, LUT applied, graded (split screen)

The “Transcend” video displayed above is a mix of high-end, DSLR, and UA cameras, captured prior to sunrise. Using 10-bit A-LOG in h.265 format allowed a noise-free capture. The video received a Billboard Music award. We are grateful to Taras Shevchenko & Gianni Howell for allowing us to share the pre-edit images of this production.


Shooting LOG isn’t viable for every production, particularly low-paying jobs, jobs that require rapid turnaround, or situations where the matched cameras offer equal or lesser quality image output. However, if artistic flexibility is desired, when matching high-end, cinema-focused cameras, or when handing footage over to an editor who requires flexibility for their needs, then HEVC in 10-bit LOG mode is the right choice.

In any event, expanding production knowledge and ability benefits virtually any workflow and professional organization, and is a powerful step to improving final deliverables.

Does the Drone Industry Really Need 8K?

Pro Read: As a leak indicates that Autel Robotics may be the first to offer a 6/8K camera on a drone, UAS expert and industry leader Douglas Spotted Eagle dives into what the advantages of 8K may be, and whether the drone industry is ready to take advantage of them.

Guest post by Douglas Spotted Eagle, Chief Strategy Officer – KukerRanken

In 2004, Sony released the world’s first low-cost HD camera, known as the HVR-Z1U. The camera featured a standard 1/3” imager, squeezing 1440×1080 (anamorphic/non-square) pixels onto the sensor. This was also the world’s first prosumer camera using the MPEG-2 compression scheme, with a color sample of 4:2:0; using a GOP method of frame progression, this new technology set the stage for much higher resolutions and, eventually, greater frame rates.

Its “father” was the CineAlta HDW-F900, which offered three 2/3” CCDs and was the industry standard for filmmaking for several years, capturing big hits such as the “Star Wars” prequel trilogy, “Once Upon a Time in Mexico,” “Real Steel,” “Tomorrowland,” “Avatar,” “Spy Kids” (1 & 2), and so many others.  The newer HDV format spawned from similar technology found in the HDW-F900, and set the stage for extremely high-end camera tech to trickle down into the prosumer space.

Over time, camera engineers identified methods of co-siting more pixels on small imagers, binning pixels, or using other techniques to increase the capture resolution on small surfaces. Compression engineers have developed new compression schemes which brought forward h.263, AVC (h.264), and now HEVC/High Efficiency Video Coding (h.265), with still others soon to be revealed.

Which brings us to the present.

We have to roughly quadruple megapixels to double resolution, so the jump from SD to HD makes sense, while the jump from HD to UHD/4K makes even more sense. Following that theme, jumping to 6K makes sense, while jumping to 8K is, in theory, perfect, and nears the maximum of the human eye’s ability to resolve information.
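The arithmetic behind that progression, using the common raster sizes:

```python
FORMATS = {            # common delivery rasters: (width, height) in pixels
    "SD 480p": (720, 480),
    "HD 1080p": (1920, 1080),
    "UHD 4K": (3840, 2160),
    "8K": (7680, 4320),
}

def megapixels(width, height):
    return width * height / 1_000_000

mp_hd = megapixels(*FORMATS["HD 1080p"])  # ~2.1 MP
mp_4k = megapixels(*FORMATS["UHD 4K"])    # ~8.3 MP: 4x the pixels, 2x the linear detail
mp_8k = megapixels(*FORMATS["8K"])        # ~33.2 MP, approaching the eye's resolving limit
```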

At NAB 2018, Sony and Blackmagic Design both revealed 8K cameras and in that time frame others have followed suit.

During CommUAV and InterDrone, several folks asked for my opinion on 6 and 8K resolutions. Nearly all were shocked as I expressed enthusiasm for the format.

– “It’s impossible to edit.”

– “The files are huge.”

– “No computer can manage it.”

– “There is nowhere to show 8K footage.”

– “Human eyes can’t resolve that resolution unless sitting very far away from the screen.”

– “Data cards aren’t fast enough.”

And… so on.

These are all the same comments heard as we predicted the tempo of the camera industry transitioning from SD to HD, and from HD to 4K.  In other words, we’ve been here before.

Video cameras are acquisition devices. For the same reasons major motion pictures are acquired at the highest possible resolutions, and for the same reasons photographers get very excited as on-camera resolutions increase, so should UAS photographers. Greater resolution doesn’t always mean higher grade images, nor do larger sensor sizes alone increase image quality. On the whole, though, higher resolution systems usually do translate into higher quality images.

Sensor sizes are somewhat important to this discussion, yet not entirely critical. The camera industry has been packing more and more pixels into the same physical space for nearly two decades, without the feared increase in noise. Additionally, better noise-sampling/reduction algorithms, particularly from OEMs like Sony and Ambarella, have allowed far greater reduction in noise compared to the past. Cameras such as the Sony A7R IV offer nearly noise-free images at ISO 32,000!

Sensor sizes vary of course, but we’ll find most UAS utilize the 1/2.3” or the 1” sensor. (Light blue and turquoise sizes respectively, as seen below.)

“Imagine an UAS equipped with an 8K camera inspecting a communications tower. Resolution is high, so small specs of rust, pitting, spalling, or other damage which might be missed with lower resolutions or the human eye become apparent with a greater resolution.”


Generally, we’re downsampling video or photos to smaller delivery vehicles, and for good reason. In broadcast, 4:2:2 uncompressed color schemes were the grail, yet most UAS cameras capture a 4:2:0 color/chroma sample.  However, a 4K capture, downsampled to 1080 at delivery, offers videographers the same “grail” color schema of 4:2:2!

As we move into 6 or 8K, similar results occur. 8K downconverted to HD offers a 4:4:4 color sample.
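Counting chroma samples shows why. In 4:2:0, the chroma planes carry half the luma resolution on both axes; the helper below encodes the standard subsampling definitions:

```python
def chroma_plane(width, height, scheme="4:2:0"):
    """Chroma sample grid for the common Y'CbCr subsampling schemes."""
    if scheme == "4:4:4":
        return width, height            # full chroma per pixel
    if scheme == "4:2:2":
        return width // 2, height       # half horizontal chroma
    if scheme == "4:2:0":
        return width // 2, height // 2  # half chroma on both axes
    raise ValueError(scheme)

# An 8K 4:2:0 capture still carries a 3840x2160 chroma grid -- four times
# the pixel count of an HD delivery frame, so every delivered HD pixel can
# receive its own chroma sample after the downscale.
cw, ch = chroma_plane(7680, 4320, "4:2:0")
```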


We gain the ability to crop for post editing/delivery to recompose images without fear of losing resolution. This means that although the aircraft may shoot a wide shot, the image may be recomposed to a tighter image in post, so long as the delivery is smaller than the source/acquisition capture. For example, shooting 4K for 1080 delivery means that up to 75% of the image may be cropped without resolution loss.
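That crop headroom is just a pixel-count ratio:

```python
def crop_headroom(src_w, src_h, out_w, out_h):
    """Fraction of the source frame that can be discarded while still
    filling the delivery raster at full resolution."""
    if src_w < out_w or src_h < out_h:
        raise ValueError("source must be at least delivery size")
    return 1.0 - (out_w * out_h) / (src_w * src_h)

h_4k = crop_headroom(3840, 2160, 1920, 1080)   # 0.75: the 75% figure for 1080 delivery
h_8k = crop_headroom(7680, 4320, 1920, 1080)   # 0.9375: 8K leaves far more room to recompose
```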

As the image above demonstrates, it’s quite possible to edit 8K HEVC streams on a newer laptop. Performance is not optimal without a great deal of RAM and a good video card, as HEVC requires a fair amount of horsepower to decode. The greater point is that we can edit images with deep recomposition. Moreover, we have more pixels to work with, providing greater color correction, color timing, and depth/saturation.

For public safety, this is priceless. An 8K capture provides great ability to zoom/crop deeply into a scene and deliver much greater detail in HD or 4K delivery.

The same can be said for inspections, construction progress reports, etc. Users can capture at a high resolution and deliver in a lower resolution.

Another benefit of 6 and 8K resolutions is the increase in dynamic range. While small sensors only provide a small increase in dynamic range, a small increase is preferable to no increase.

To address other statements about 6K and 8K resolutions: the human eye can see roughly 40 megapixels, depending on age. 8K is approximately 33 megapixels. However, the human eye doesn’t see equal resolution across its entire field of view; the center of the eye resolves approximately 8 megapixels, while the outer edges resolve far less. High resolution does provide greater smoothing across the field of view, so our eyes see smoother moving pictures.


Going well beyond the human eye, higher resolutions are applicable to “computer vision,” benefiting mapping, 3D modeling, and other similar applications. Generally speaking, more pixels equals greater smoothness and geometry. As technology moves deeper into artificial intelligence, higher resolutions paired with more efficient codecs become even more important. Imagine a UAS equipped with an 8K camera inspecting a communications tower. Resolution is high, so small specks of rust or other damage which might be missed at lower resolutions or by the human eye become more visible. Now imagine that greater resolution providing input to an AI-aided inspection report that might notify the operator or manager of any problem. Our technology is moving beyond the resolution of the human eye for good reason.


Files from a 6 or 8K camera are relatively small, particularly when compared to uncompressed 8K content (9.62 TB per hour). Compression formats, known as “codecs,” have been improving steadily for years. For example, when compressed delivery first debuted in physical form, we saw Hollywood movies delivered on DVD; then we saw HD delivered on Blu-ray. Delivery on disc formats is dead, and we’ve since moved through MPEG-2, AVC/H.264, AVCHD, and now H.265/HEVC. In the near future we’ll see yet more compression schemes benefitting our workflows, whether delivered via streaming or thumbdrive. VVC, or “Versatile Video Codec,” will be the next big thing in codecs for 8K, scheduled to launch in early 2022.

Conventional H.264 and H.265/HEVC are currently being used as delivery codecs for compressed 6 and 8K streams. 8K has been successfully broadcast (in testing environments) at rates as low as 35 Mbps for VOD, while NHK has set the standard at 100 Mbps for conventional delivery. Using these codecs, downconverting streams for OTA (over-the-air) viewing on tablets, smartphones, or ground station controllers is already possible, though it’s unlikely we’ll see full 8K streaming from the UAS to the ground control station any time soon.
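Those bitrates translate into very manageable storage requirements. As a rough sketch (decimal units assumed, container overhead ignored):

```python
# Storage consumed per hour of video at a given bitrate.
# Uses decimal units: 1 GB = 1e9 bytes.
def gb_per_hour(mbps):
    bytes_per_second = mbps * 1e6 / 8
    return bytes_per_second * 3600 / 1e9

print(gb_per_hour(100))  # 45.0 GB/hr at NHK's 100 Mbps 8K rate
print(gb_per_hour(35))   # 15.75 GB/hr at the 35 Mbps VOD test rate
```

Set against the roughly 9.62 TB/hr uncompressed figure cited above, 45 GB/hr represents a compression ratio on the order of 200:1.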

U3 data cards are certainly prepared for 6 and 8K resolutions/datastreams; compression is what makes this possible. The Kandao 8K and Insta360 8K cameras, both available in the market today, record to U3 cards.

It will be some time before the average consumer sees 8K screens in their homes. However, for advertising, or for matching large-format footage shot on RED Weapon, Monstro, Helium, or other large-format cameras, delivering in 8K may be less time-consuming, even from the smaller cameras carried on a UAS (and those larger cameras may easily be carried on a heavy-lift UAS).

Professional UAS pilots will benefit greatly from 5, 6, or 8K cameras, and should not be shy about testing the format. Yes, it’s yet another paradigm shift in an always-fluid era of aerial and visual technology. There can be no doubt that these higher resolutions provide higher quality in any final product. Be prepared: 2020 is the year of 5, 6, and 8K cameras on the flying tripods we’re using for our professional and personal endeavors, and I, for one, am looking forward to it with great enthusiasm.

Part 91, 101, 103, 105, 107, 137: WHAT’S THE DIFFERENCE?

All these FARs, what’s a drone pilot to do in order to understand them? Do they matter?


In virtually every aviation pursuit except for sUAS, an understanding of regulations is requisite and part of most testing mechanisms. As a result, many sUAS pilots holding a Remote Pilot Certificate under Part §107 are woefully uninformed, to the detriment of the industry.

Therefore, sUAS pilots would be well-served to inform themselves of how each section of relevant FARs regulate components of aviation.

Let’s start by digging into the intent of each Part.

  • Part §91 regulates General Operating and Flight Rules.
  • Part §101 regulates Moored Balloons, Kites, Amateur Rockets, Unmanned Free Balloons, and some types of Model Aircraft.
  • Public Law 112-95, Section 336 regulates hobby drones as an addendum to Part 101.
  • Part §103 regulates Ultralight Vehicles (lightweight manned aircraft, powered or unpowered).
  • Part §105 regulates Skydiving.
  • Part §107 regulates sUAS.
  • Part §137 regulates Agricultural Aircraft.


Part §91

This portion of the FARs is rarely recognized, although certain sections of Part 91 may come into play in the event of an action by the FAA against an sUAS pilot. The most concerning portion of Part 91 is §91.13, “Careless or Reckless Operation.” Prior to Part 107, nearly every action taken against an sUAS pilot included a §91.13 charge.

Even in actions specific to drones, the vast majority of individuals charged have also faced a specific §91.13 count.

sUAS pilots, whether recreational or commercial, may be charged under §91.13 or the more relevant §107.23 (hazardous operation).

It’s pretty simple: if there are consequences to a pilot’s choices and actions, it’s likely those consequences also involved a disregard for safety or planning; ergo, careless/reckless. The FAA recently initiated an action against Masih Mozayan for flying his aircraft near a helicopter and taking no avoidance action. It has also taken action against Vyacheslav Tantashov for actions that resulted in damage to a military helicopter (without seeing the actual filing, it’s a reasonable assumption that the charge will be a §91.13 or a §107.23 hazardous operation).

Other parts of Part 91 are relevant as well. For example:

  • §91.1   Applicability.

(a) Except as provided in paragraphs (b), (c), (e), and (f) of this section and §§91.701 and 91.703, this part prescribes rules governing the operation of aircraft within the United States, including the waters within 3 nautical miles of the U.S. coast.

The above paragraph includes sUAS.  Additionally, Part 107 does not exclude Part 91. Airmen (including sUAS pilots) should be aware of the freedoms and restrictions granted in Part 91.

§91.3   Responsibility and authority of the pilot in command.

(a) The pilot in command of an aircraft is directly responsible for, and is the final authority as to, the operation of that aircraft.

(b) In an in-flight emergency requiring immediate action, the pilot in command may deviate from any rule of this part to the extent required to meet that emergency.

(c) Each pilot in command who deviates from a rule under paragraph (b) of this section shall, upon the request of the Administrator, send a written report of that deviation to the Administrator.

§91.7   Civil aircraft airworthiness.

(a) No person may operate a civil aircraft unless it is in an airworthy condition.

(b) The pilot in command of a civil aircraft is responsible for determining whether that aircraft is in condition for safe flight. The pilot in command shall discontinue the flight when unairworthy mechanical, electrical, or structural conditions occur.

§91.15   Dropping objects.

No pilot in command of a civil aircraft may allow any object to be dropped from that aircraft in flight that creates a hazard to persons or property. However, this section does not prohibit the dropping of any object if reasonable precautions are taken to avoid injury or damage to persons or property.

§91.17   Alcohol or drugs.

(a) No person may act or attempt to act as a crewmember of a civil aircraft—

(1) Within 8 hours after the consumption of any alcoholic beverage;

(2) While under the influence of alcohol;

(3) While using any drug that affects the person’s faculties in any way contrary to safety; or

Sound familiar?

Subpart B also carries relevant information/regulation with regard to operation in controlled airspace, operations in areas under a TFR (§91.133), operations in disaster/hazard areas, flights during national events, and lighting (§91.209).

Part §101

Part §101 has a few applicable sections.

Subpart (a) under §101.1 covers model aircraft and tethered aircraft (balloons). Although subpart (a)(4) is applicable to balloon tethers, there is argument that it also applies to sUAS. Subpart (a)(5)(iii) defines recreational flight for sUAS/model aircraft.

Finally, §101.7 re-emphasizes §91.15 with regard to dropping objects (which may not be done without taking precautions to prevent injury or damage to persons or property). Public Law 112-95 Section 336 (which may be folded into a “107 lite” version) clarifies sections not added to Part 101.

Bear in mind that unless the pilot follows the rules and guidelines of an NCBO such as the AMA, AND the requirements of that NCBO are met, the flight requirements default to Part 107.

Part §103

Part §103 regulates ultralight vehicles (lightweight manned aircraft, which may be powered or unpowered).

Although no component of Part §103 specifically regulates UAV, it’s a good read, as Part 103 contains components of regulation found in Part 107.

Part §105

Part §105 regulates Skydiving.

Although Part §105 carries no regulation specific to sUAS, an understanding of Part 105 provides great insight into components of Part 107. Part 107 has very few “new” components; most of its components are clipped out of other FAR sections.

Part §107

Although many sUAS pilots “have their 107,” very few have actually absorbed the FAR beyond a rapid read-through. Without a thorough understanding of the FAR, it’s difficult to comprehend the foundation of many rules.

Part §137

Part 137 applies specifically to spraying crops via aerial vehicles.

Those looking into crop spraying via sUAS should be familiar with Part 137, particularly with the limitations on who can fly, where they can fly, and how crops may be sprayed.
One area every ag drone pilot should look at is §137.35 and §137.55, regarding limitations and business licenses.

The bottom line is that the more informed a pilot is, the better pilot they can be. While there are many online experts purporting deep knowledge of aviation regulations and how they specifically apply to sUAS, very few are familiar with the specific regulations, and fewer still are informed as to how those regulations are interpreted and enforced by Aviation Safety Inspectors (ASIs). We’ve even had Part 61 pilots insist that the FSDO is a “who” and not a “what/where.” Even fewer are aware of what an ASI is and how ASIs relate to the world of sUAS.

FSIMS Volume 16

It is reasonably safe to say that most sUAS pilots are entirely unaware of the Flight Standards Information Management System, aka “FSIMS.” I’ve yet to run across a 107 pilot familiar with the FSIMS, and I was recently, vehemently informed that “there is nothing beyond FAR Part 107 relative to sUAS.” Au contraire…

Familiarity with the FSIMS may enlighten sUAS operator/pilots in how the FAA examines, investigates, and enforces relevant FARs.

Chapter 1, Sections 1, 2, and 4 are a brief but important read, as is Chapter 2, Section 2.

Chapter 3, Section 1 is informational for those looking to apply for their Part 107 Remote Pilot Certificate.

Chapter 4, Sections 2, 5, 7, and 8 are of particular value for commercial pilots operating under Part 107.

Volume 17, although related mainly to manned aviation, also has components related to Part 107, and Chapters 3 and 4 should be read by 107 pilots who want to be informed.

Gaining new information is always beneficial, and even better if the new information is implemented in your workflow and program. Become informed, be the best pilot you can be, and encourage others to recognize the value in being a true professional, informed and aware.


Six ways drones have proven themselves as a tool for the AEC, Surveying, and mapping industries.


Unmanned aircraft, or drones, are becoming much more common on today’s project sites, and many companies in the AEC, surveying, and mapping industries are utilizing these aircraft daily. So how do drones capture data? What are professionals getting out of that data? And what makes a drone a valuable tool rather than a toy?

UAS technology has advanced to a point where the aircraft, while still very sophisticated, are quite simple to operate. They utilize altimeters, magnetometers, inertial measurement units, GNSS (GPS) receivers, and radio transmitters to control flight operations, but the end user would never know it. These sensors and more are all managed behind the scenes so well that an operator can take off from any point, fly a “mission” involving several data-collection tasks, avoid collisions with unexpected obstacles, know when there is just enough battery to return home safely, and land, all in a constantly changing environment, 100% autonomously, starting from a single tap for initiation. Flying a drone is fun, but unless you’re collecting data it brings no value. Many sensors can be attached to unmanned aircraft, such as LiDAR units and gravimeters, but in this article we are primarily going to address cameras and their use in photogrammetry.


Photogrammetry can be summarized as the art, science, and technology of making precise measurements from photos, and it has been around since the mid-1800s. When you photograph an object from two different angles and add some trigonometry, three-dimensional measurements can be calculated. The entire process is simple and automated, and a 3D model from aerial imagery is nothing new.

The whole process works like this: the distance (f) from a camera lens to its sensor is proportional to the distance (h) from that lens to the objects being photographed. This relationship is written into several equations that photogrammetrists use to calculate things such as the scale of a photo and even the elevation of specific points or pixels in aerial photographs.
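As an illustration of that proportionality, the ground sample distance (GSD) of a photo can be estimated from focal length and altitude. The sensor numbers below are assumptions for a typical 1″-sensor drone camera, not a quoted specification:

```python
# Estimate ground sample distance (meters per pixel) from the f/h
# relationship: a sensor pixel projects onto the ground scaled by h/f.
def gsd_m(sensor_width_mm, image_width_px, focal_mm, altitude_m):
    pixel_size_m = (sensor_width_mm / image_width_px) / 1000.0
    return pixel_size_m * altitude_m / (focal_mm / 1000.0)

# Assumed example: 13.2 mm wide sensor, 5472 px images, 8.8 mm lens,
# flown at 100 m above ground level -> roughly 2.7 cm per pixel.
print(round(gsd_m(13.2, 5472, 8.8, 100) * 100, 2), "cm/pixel")
```

Halving the altitude (or doubling the focal length) halves the GSD, which is why flight height is the primary lever for detail in a mapping mission.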

When two overlapping photographs are in correct orientation relative to each other, a stereopair (stereoscopic imagery) exists. This imagery creates perspective on objects within the overlap of the photographs and is the principle behind all forms of 3D viewing.


As mentioned above, drone users can pre-program routes to fly over their intended mapping area. Photos are taken with a specific overlap, which is computed based on altitude, speed, and the resolution of the camera sensor. Drones use onboard sensors like GNSS, or even real-time corrected positioning (RTK), both to georeference the photos taken and to control the flight of the aircraft by changing the RPMs of the individual motors. This data is carried in the image files, where it is further processed.
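As a sketch of that overlap computation (all numbers are illustrative assumptions, not from any particular flight-planning app):

```python
# Shutter interval needed to hit a target forward overlap.
# footprint = ground distance covered by one photo along the flight line.
def shutter_interval_s(gsd_m, image_px_along_track, overlap, speed_ms):
    footprint_m = gsd_m * image_px_along_track   # ground coverage per photo
    advance_m = footprint_m * (1 - overlap)      # new ground each photo adds
    return advance_m / speed_ms

# Assumed mission: 5 cm/px GSD, 3648 px along-track image height,
# 80% forward overlap, 10 m/s ground speed.
print(round(shutter_interval_s(0.05, 3648, 0.80, 10.0), 2), "s between photos")
```

The same relationship runs in reverse in mission-planning software: given a camera and a desired overlap, the app fixes altitude and speed and derives the trigger interval automatically.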

Today’s photogrammetry software uses these mathematical principles to orient, scale, and combine photographs and data. The software will ultimately generate point clouds, orthorectified (measurable) photos, and 3D models in varying output types.


Drones and Unmanned Aircraft in AEC and Construction: Valuable Applications for AEC, Surveying, and Mapping

Surveying and Mapping. The use of drones and unmanned vehicles in surveying and mapping is almost self-evident; surveyors and cartographers have used aerial photography dating back about as far as the invention of the airplane. What may not be immediately apparent is that the cost to purchase a survey-quality UAS and the required software is a small investment in comparison to traditional surveying equipment, and the man-hours saved easily pay for it. Point clouds and orthometric photos are great for drafting planimetric features and generating TIN surfaces to represent topography. Whether you’re mapping for design data, a feasibility study, or GIS, or performing an ALTA/ACSM survey, using an unmanned vehicle to capture data may be significantly more efficient than traditional means.

Reality Capture is just that: capturing the reality of the current conditions of a project site. This is a great practice for design, bidding, marketing, and simply helping clients “capture the vision.” It may be as simple as viewing an oblique photograph or as complicated as combining a designed structure with a 3D mesh and viewing it in VR. I personally get a kick whenever I see an IFC model inserted into a point cloud.

Building Information Modeling (BIM). It would be hard to mention reality capture without mentioning BIM. (While flying a drone indoors is doable, it’s not very practical, so that is not what we are referring to here.) Many companies today, especially in the design-build world, are utilizing BIM for much more than building modeling; they are integrating models into all of their civil design as well. These departments are already using laser scanning and are familiar with point clouds, so adding a UAS to their tool chest is a natural move. Drones are great for capturing data that can be used for clash detection, QC, and as-built drawings.

Pre-Construction and Takeoffs are a major part of heavy civil construction. When it comes to moving dirt, knowing exactly what must be done can make all the difference between winning a bid, making a profit, or losing your shirt. Takeoffs are done when companies are bidding on projects, but the same process recurs in design-builds and any time an RFI or change order comes up. Capturing data that represents the existing site conditions is key when building a model and matching existing roadway and other civil tie-in points. Using a drone is a great way to make this happen.


Project Management. Unmanned aircraft may be utilized for many processes in project management. Creating progress reports and viewing current conditions may be the most basic use, and might just be the most beneficial when it comes to decision making. Billing on some projects is based solely on materials moved and/or installed, which makes tracking linear feet, area, and volumes the bottom line. Other, often overlooked uses include creating safety plans and incident reports, public involvement, and training.

Inspections. Drones are one of the best tools utilized in inspections. Often an environment is not safe for a person, such as inspecting a high wall in an open-pit mine; or the situation may not be as efficient for an individual, such as climbing versus flying to inspect bolts on a suspension bridge. When we add infrared/thermal sensors to unmanned aircraft, they are capable of much more. Infrared light is absorbed by water, making it possible to discover moisture that may be invisible to the naked eye; this is great for leak detection, among other things. Thermal imaging makes it possible to view and analyze heat signatures, and is often used to find areas of heat loss in mechanical and structural applications alike.

One of the biggest challenges companies in the AEC, surveying, and mapping industries face today is a shortage of manpower, and the only way to overcome that shortage is to innovate. Many choosing to innovate are looking to drones to solve their problems. Two trends I’ve noticed in helping companies develop their UAS applications are that they may start with a particular expectation in mind and a single drone, but they always end up utilizing their UAS data more than they anticipated and want to expand their drone fleet. I believe UAS technology is one of the best investments a company in these industries can make. It is very apparent to me that unmanned aircraft are a major focus of developing technology. They are a powerful tool, not a toy.

By Bryan Worthen Kuker-Ranken SLC



Kuker-Ranken has been in business for nearly 100 years; Customer Service is our top priority, whether precision instruments, unmanned aircraft/drones, or construction support supplies.
Call us today for pricing on drones, training, and service! (800) 454-1310