Fly By Wire Air is a one-stop shop for the aviation enthusiast. You will find aviation apparel, RC hobby planes, items for the historic aviation buff and even products and services for amateur pilots. We hope you will enjoy visiting our site. When you think of flying – Fly By Wire.
In 1939 Trans World Airlines was becoming a major competitor with Pan American Airways for the emerging overseas route service. While TWA contracted with Lockheed to develop an aircraft to rival the performance and capacity of the Boeing Stratoliner, a major stockholder of TWA asked Lockheed to build an even greater plane – one which would ultimately define both an airline and an era of aviation.
Though Lockheed had been working on the L-044 Excalibur since 1937, Howard Hughes, the majority stockholder of Trans World, requested Lockheed develop an even more capable aircraft with a forty-passenger capacity and a range of 3,500 miles. The new design, the L-049 Constellation, was a radical departure from previous airliners. The triple-tail configuration kept the aircraft's height low enough to fit in existing hangars, while the wing layout was similar to that of another Lockheed plane, the P-38 Lightning. The L-049 featured such innovations as hydraulically boosted controls, a de-icing system for the wing and tail surfaces, and tricycle landing gear. The Constellation delivered impressive performance for its day, attaining a maximum speed of 375 mph and a cruising speed of 340 mph – faster than many fighters of the era – with a service ceiling of 24,000 ft.
While intended for use as an airliner, the L-049 first flew in January 1943 and was pressed directly into military service, with the USAAF ordering 202 aircraft. Designated C-69 by the military, it served primarily as a long-range troop transport. Though the C-69 was successful in its role, only 22 aircraft were produced during the war. A number remained in service with the USAF into the 1960s, ferrying relocating military personnel. Lockheed even had plans to develop the L-049 as a long-range bomber (XB-30), but the design was never pursued.
Following World War II the Constellation began its heyday. USAAF C-69 transports were completed as civil airliners, with TWA accepting its first aircraft in October 1945 and initiating its first transatlantic flight, from Washington DC to Paris, in December of that year. During the late 1940s the Constellation was upgraded several times to increase fuel capacity and speed. Finally, in early 1951, the Super Constellation was introduced. The Super Connie, the L-1049, was stretched 18.4 ft. over the original L-049, expanding passenger capacity to ninety-two seats with a cruising speed of 305 mph and a range of 5,150 miles. With auxiliary wing-tip fuel tanks, the Super Constellation could fly non-stop between New York and Los Angeles – leading some pilots accustomed to shorter runs to complain about long days. An early problem with the 1049 model was excessive exhaust flaming, sometimes extending past the trailing edge of the wing. Once the exhaust problem was corrected, the Super Connie became a highly successful airliner.
In 1955 the Constellation underwent additional updates. Though still called the Super Constellation, the Model 1649 was first designated the Super Star Constellation before Lockheed finally settled on the Starliner name. The Starliner was the most extensive modification of any Constellation model, with features such as fully reclining seats for long flights, more precise cabin temperature and ventilation control, and state-of-the-art noise insulation. Outside, the Starliner received a longer and narrower wing, nearly doubling the capacity of the original Connie with twice the range at maximum payload – enabling it to reach any major European air hub non-stop from US airports. The Model 1649 also holds the distinction of being the fastest piston-engined airliner flown at ranges of over 4,000 miles.
The Constellation served in a number of military roles in addition to troop transport. In 1948 the USAF placed an order for ten Constellation transport aircraft (C-121), several of which were deployed in support of the Berlin Airlift later that year. Six of the planes were later reconfigured as VIP transports (VC-121), one of which was used by Dwight Eisenhower as NATO Supreme Commander. Eisenhower was so impressed with the plane that he named it Columbine. When he became President he was assigned another VC-121, which he named Columbine II. In the early 1950s the US Navy, Air Force, and Marine Corps ordered C-121s fitted with radar domes to provide long-range radar coverage for surface ships, as well as surveillance radar for command and control of aircraft. In the early 1960s, EC-121s briefly performed an anti-submarine role for the US Navy.
By the end of the 1950s the Constellation had become an aviation icon. It was in service with more than a dozen airlines and had quickly become the flagship of Trans World Airlines. The Connie also served both the US military and several other government agencies, with duties ranging from tracking smugglers to hunting hurricanes. Though expensive to build due to its tapered fuselage, the Constellation was a graceful aircraft. While the major airlines rapidly phased it out in 1961 in favor of newer jetliners such as the Boeing 707 and Douglas DC-8, the Connie remained in use with a number of regional airlines. In all, 856 examples were built. Howard Hughes' gamble in 1939 had paid off in a big way.
Though flying an rc model can be a fun activity, certain safety considerations must be observed in order to make the flight both a safe and enjoyable experience. In this blog, we'll look at buying an rc plane or helicopter from the safety standpoint, as well as techniques to promote safe flying.
Real aircraft must undergo a pre-flight checklist, which is also a good philosophy for radio control aircraft. The pilot must make sure the rudder, ailerons, and elevators are functioning properly, with both the receiver battery and radio fully charged. A particular concern with rc planes, as opposed to rc helicopters, is the center of gravity – the point at which the plane must balance in order to fly well. The center of gravity for a plane with a tail can be as far back as 32% from the nose, though the operator may still need to fine-tune the balance; flying wing designs typically balance at about 23% back from the nose. While placement of the battery and radio can compensate for any CG imbalance, it's always desirable to keep both the nose and tail sections of the aircraft light, making adjustments at the center of the fuselage. A properly balanced plane will be more responsive to commands and use less fuel/battery charge.
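Before the first flight, the balance point can be estimated on paper by summing weight moments about the nose. The sketch below does exactly that; the component names, weights and positions are hypothetical figures for a small foam trainer, not measurements from any particular model:

```python
# Estimate the center of gravity by summing moments about the nose.
# All figures below are illustrative, not from a real airframe.
components = [
    # (name, weight in grams, distance from nose in cm)
    ("motor", 60, 2),
    ("battery", 120, 8),
    ("receiver", 10, 15),
    ("airframe", 250, 22),
    ("servos", 30, 30),
]

total_weight = sum(w for _, w, _ in components)
# CG = (sum of weight * arm) / total weight
cg = sum(w * p for _, w, p in components) / total_weight

fuselage_length = 60  # cm, nose to tail (assumed)
cg_percent = 100 * cg / fuselage_length

print(f"CG at {cg:.1f} cm from the nose ({cg_percent:.1f}% of length)")
```

Shifting the battery forward or aft is usually the easiest way to move the computed CG into the desired range without adding ballast.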
Before launching the plane, be sure the correct propellers are installed. The thickest section of the prop should face toward the front; the rc pilot can identify the front of the blade by the manufacturer lettering. The plane will still fly with the prop(s) mounted backward, though at roughly a third of the power of a correctly mounted blade, since the thicker front section displaces more air. Better quality props are more rigid, and thus more stable in flight – especially at high rpms. They are also less likely to flatten out over extended use. Getting a good launch is more difficult than it appears. Inexperienced rc pilots have a tendency to spin the plane on release, a frequent cause of crashes. If one wingtip is moving faster than the other, it will have more air flowing over its wing, so the plane will roll toward the slower wing. The correct procedure is to release the plane when both wings are level and moving in the same direction at the same speed. If the model is launched at too steep an angle, it will experience an immediate stall.
Transmitters are an important element of rc flight. Many rc pilots have a tendency to fly their planes with their thumbs alone. Instead, grip the transmitter sticks from the side when working the elevon or elevator-aileron controls. This gives the rc pilot more than one point of contact with the controls and prevents accidental maneuvers of the rc plane. Don't jerk the control sticks; use gradual motions to control the model. Proper antenna angle is another factor, since local interference can affect signal quality. Fly the plane at a close distance, trying different antenna angles to determine the optimum signal. While many recent rc planes are equipped with a homing function, which returns the plane to the transmitter if the model experiences signal or line-of-sight interference, it's always best to fly your rc plane no farther than your field of view. A three-channel transmitter with throttle, rudder and elevator controls is usually best for a beginner.

Speaking of planes, the most important decision facing a beginning rc pilot is choice of aircraft. The hard fact is the plane will experience a number of crashes until the pilot becomes more proficient. Foam is a relatively inexpensive material and easy to repair if the rc plane is damaged. While foam construction is not the most pleasing to the eye, it provides the rc beginner with a practical means of getting into the air. RC models may be purchased either ready to fly (RTF) or in kit form, which must be assembled. Building your own model has the advantages of learning the parts and operating systems of the plane, as well as a lower cost. RC model planes may be powered by either gasoline engines or lithium polymer (LiPo) batteries. The use of LiPo batteries has increased drastically in radio control use over the last ten years; they offer near gasoline-engine performance while being more compact, with little or no maintenance.
The use of radio control aircraft, quadcopters and drones has increased exponentially over the last fifteen years. Near collisions between drones and passenger aircraft now run into the hundreds each year, with the FAA receiving in excess of 100 reports per month. While most drones weigh less than ten pounds and have a limited altitude, heavier and more capable machines are on the rise. Even a collision between a lightweight drone and a jetliner could result in millions of dollars in damage if the jetliner's engines or control surfaces were struck. Though the FAA has had a regulation in effect for four years making it illegal to fly a drone within five miles of an airport and limiting altitude to 400 ft., many operators who use drones in their business pay scant attention. A year after that rule took effect, the FAA enacted a five dollar registration fee for all drones weighing more than half a pound. While ineffective at tracking drones, it may at least get the attention of some operators. For all the electronics and regulations, perhaps the best source of rc model safety is common sense.
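The two limits cited above – no flying within five miles of an airport and a 400 ft. ceiling – lend themselves to a simple pre-flight sanity check. The sketch below is illustrative only (it is not an FAA tool, and the `flight_allowed` helper and coordinates are invented for the example); it uses the haversine formula to compute the great-circle distance to each airport:

```python
import math

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in statute miles (haversine formula)."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def flight_allowed(lat, lon, altitude_ft, airports):
    """True if the planned flight is under 400 ft and at least
    five miles from every airport in the list."""
    if altitude_ft > 400:
        return False
    return all(miles_between(lat, lon, a_lat, a_lon) >= 5
               for a_lat, a_lon in airports)

airports = [(40.6413, -73.7781)]  # JFK, for illustration
print(flight_allowed(40.7831, -73.9712, 300, airports))  # Central Park-ish, ~14 mi out
print(flight_allowed(40.6500, -73.7800, 300, airports))  # under a mile from JFK
```

A real planning tool would of course need a complete, current airport database rather than a hand-entered list, but the distance-and-ceiling logic is this simple.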
Pretend you’re the pilot of a large jetliner. You’ve completed pre-flight checks, both inside and outside, and are ready for takeoff. As you climb, the plane begins to vibrate and then pitch to one side. The number two engine then separates, and you are faced with a decision – jettison the remaining fuel or make a heavy landing with fuel on board. While engine separations are not frequent occurrences in air travel, they can have tragic consequences for both the plane and the surrounding area. In this blog, we’ll review two key cases involving such incidents.
In May 1979 a McDonnell Douglas DC-10 (American Airlines Flight 191) was making a regularly scheduled passenger flight from O’Hare International Airport in Chicago to Los Angeles International Airport. Moments after takeoff, the aircraft plummeted to the ground, killing all 258 passengers, the crew of thirteen, and two people on the ground. The subsequent investigation revealed the number one engine had separated from the left wing, flipping over the top of the wing and landing on the runway. During the separation the engine severed several hydraulic lines, allowing the leading edge wing slats to retract, and damaged a three-foot section of the wing. As the plane began to climb, the damaged left wing provided far less lift than the right wing while the remaining engines were at full throttle. This asymmetry caused the aircraft to roll abruptly to the left, reaching a bank angle of 112 degrees before crashing.
While the cause of the DC-10 engine loss was later traced to a damaged pylon structure connecting the engine to the wing, several other factors also played a role in the crash. The number one hydraulic system, powered by the number one engine, failed, though motor pumps tied to the number three system kept it partially supplied. Hydraulic system three, though also damaged and leaking fluid, continued to provide pressure until the crash. Electrical problems were also a factor. The number one electrical bus, powered by the number one engine, failed, taking several systems offline, including the captain’s instruments, the stick shaker and the wing slat sensors. As a result of the partial electrical failure, the flight crew received a warning only about the number one engine’s failure – not its loss. Though the crew had a closed-circuit television screen behind the pilot from which to view the passenger compartments, it too lost power with the engine. After the Flight 191 incident and three other DC-10 crashes during the 1970s, a number of major airlines began to phase out the DC-10 in the early 1980s in favor of newer, more fuel-efficient jetliners such as the Boeing 757 and 767. While the phaseout was driven more by fuel efficiency, the aircraft’s safety record cast a cloud over its service.
The DC-10 wasn’t the only wide-body jet to experience engine separation. In October 1992 an El Al Israel Airlines Boeing 747-200 cargo plane (Flight 1862), with three crew members and one passenger on board, began a flight from John F. Kennedy Airport, New York to Ben Gurion International Airport, Tel Aviv, with an intermediate stop at Schiphol Airport, Amsterdam. Weather conditions were favorable at the time of departure, and all pre-flight checks were performed with no defects found. About ten minutes out of Schiphol, the flight data recorder indicated both engines 3 and 4, along with their connecting struts, had left the aircraft. The co-pilot transmitted an emergency call to Schiphol, requesting a return to the airport. However, the aircraft could not make a straight-in approach, due to both altitude and proximity to the airport, so the air traffic controller had to vector the El Al plane back by flying a pattern of descending circles to lower the altitude for a final approach. About five minutes into the pattern, the flight crew informed the controller of the loss of engines three and four and reported they were beginning to experience flap control problems. The controller directed a new heading to the flight crew, but noticed the plane was taking 30 seconds to change headings. About three minutes later, the flight crew informed air traffic control they were receiving audible warnings indicating a lack of control and low ground proximity. Approximately twenty-five seconds later, the aircraft crashed into an eleven-story apartment building about seven miles from Schiphol Airport.
Both the number 3 and number 4 engine struts were recovered from Naarden Harbour, just east of Amsterdam, with both engines still attached. The remaining parts of the aircraft were located within a thousand-foot radius of the impact. From an analysis of the parts and their placement, investigators determined the number 3 engine separated first, traveling in an outboard direction, striking engine 4 and causing it and its supporting strut to separate from the plane. The engine struts, or pylons, are designed as two-cell torque boxes transferring vertical, horizontal and torsional loads to the wing, acting as an aerial shock absorber. The Boeing 747 pylon was supported internally by five fuse pins, which provide enough strength to hold the pylon in place except under extreme loads, in which case the pins fail, allowing the engine to break away without damaging the wing fuel tanks. Boeing adopted this philosophy from experience with the earlier 707 and 727 models, in which a number of on-ground and mid-air engine separations occurred. The crash of the El Al jetliner was attributed to the failure of a center fuse pin in the number 3 engine strut. The pin, a bottle-bore design, had cracked due to metal fatigue. The FAA had issued a directive in 1979 requiring airlines to inspect the fuse pins every 2,500 flight hours, as the bottle-bore design was prone to fail at that point. The El Al 747 was one of a few aircraft which had not yet had its bottle-bore pins replaced. As a result of the El Al crash and two other 747 incidents, the FAA mandated a retrofit of all Boeing 747 wing struts in 1995. The new strut design offered increased protection in the event of an engine separation, while still using fuse pins to protect the wing tank from damage during ground impact.
As the two previous cases indicate, engine separations may result from a number of problems. Sometimes it’s a matter of faulty parts, while lack of proper maintenance plays a role in others. The overall design of the aircraft itself may be a factor. However, the safe operation of an aircraft requires a continual interplay of aviators, air controllers, engineers and the flying public to promote flight safety.
In late 1945 the USAAF was at a crossroads. While the B-29 Superfortress had been a capable platform for carrying the war to Japan, future requirements dictated an aircraft of intercontinental range, in excess of five thousand miles. The Convair B-36 Peacemaker met this requirement, but would not enter service for three more years. Further complicating matters, General Curtis LeMay and several other forward-thinking generals were already considering a jet-powered bomber. Within a few years, the generals and engineers got together and designed a truly great jet bomber – the Boeing B-52 Stratofortress. In this blog we'll tell the story of the B-52, its development and its long service record with the USAF.
In addition to the range requirement, other performance characteristics specified by the Air Materiel Command in 1946 were a cruising speed of 300 mph at an altitude of 34,000 ft., a minimum payload of 10,000 lbs, and five or six 20 mm gun turrets. The AMC issued bids later that year, with Boeing, Glenn L. Martin and Consolidated Aircraft submitting proposals. The Air Force accepted the Boeing proposal, an aircraft powered by six turboprop engines with a range of 3,110 miles. The Boeing plane, designated Model 462, was a straight-winged aircraft with a gross weight of 360,000 pounds – a heavy plane for its day. As a result of the weight issue, the Air Force began to have doubts about the ability of the aircraft to perform its mission. Boeing then offered a smaller follow-up design, Model 464, having four engines and a 230,000-pound gross weight. While the 464 was deemed acceptable, the Air Force changed its requirements within a few months to a plane with a 400 mph cruising speed and a 300,000-pound gross weight. Additionally, the Air Force wanted an aircraft with a range of twelve thousand miles, capable of delivering a nuclear weapon. These modifications increased the gross weight of the plane to 480,000 lbs.
Boeing responded by proposing two bombers, Model 464-16 and Model 464-17. Both were four-engine turboprop designs, with the Model 16 being a nuclear-only aircraft carrying a ten thousand lb. payload and the Model 17 a conventional bomber able to mount a 9,000 lb. payload. By mid-1947 the Model 17 was deemed acceptable by the Air Force, except for the range requirement. By now designated the XB-52, the aircraft offered only marginal gains in speed and range over the Convair B-36, which was about to enter service. The Air Force then postponed the project for six months in order to evaluate its potential. After a series of intense discussions between Boeing and the Air Force, the XB-52 project was back on track in January 1948, with Boeing urged to include the latest aviation innovations, such as jet engines and aerial refueling, in the bomber design. In May 1948 Boeing studied substituting jet engines for the turboprops, though the Air Force still favored a turboprop design, since jet engines of the era lacked fuel efficiency. October 1948 proved to be the crucial month for the XB-52 project. Boeing engineers George Schairer, Art Carlsen and Vaughn Blumenthal presented a refined turboprop design to Colonel Pete Warden, Director of Bomber Development for the USAF. After reviewing the proposal, Warden asked the Boeing design team if they could prepare a proposal for a four-engine turbojet bomber. The following day Colonel Warden reviewed the new design and requested an improved version. After returning to their hotel room, Schairer, Carlsen and Blumenthal were joined by Ed Wells, Boeing Vice President of Engineering, in addition to two other Boeing engineers, Bob Withington and Maynard Pennell. After eight hours of intense deliberation, the Boeing team had designed an entirely new airplane.
The new XB-52 concept had 35-degree swept wings, based on the B-47 Stratojet, with eight engines paired in four pods below the wings, bicycle landing gear and outrigger wheels beneath the wingtips. The XB-52 also had crosswind landing gear, which could pivot 20 degrees from the aircraft centerline to compensate for crosswinds upon landing. Warden approved the design the following week, and the Air Force signed a contract with Boeing in February 1951 for an initial production run of 13 B-52As.
When the B-52 entered service in 1955, it was assigned to the Strategic Air Command (SAC) to deliver nuclear weapons under the doctrine of massive retaliation. Carrying a 50,000 lb. payload and capable of flying nearly halfway around the globe, the Stratofortress was ideally suited for its role and soon became the standard for future bomber aircraft; three B-52s from March AFB set an around-the-world flight record in 1957. As with all aircraft, however, it had its share of teething troubles. The split-level cockpit had climate control problems: while the pilot and co-pilot baked in sunlight on the upper deck, the navigator and observer nearly froze on the lower deck. Early B-52 models were often grounded by both electrical and hydraulic issues, with the Air Force assigning contractor teams to B-52 bases to troubleshoot problems as they arose.
By the late 1950s, advances in Soviet surface-to-air missile (SAM) capabilities brought about a major upgrade in the electronic countermeasure capabilities of the B-52 and caused SAC to change its philosophy from high-altitude bombing to low-level penetration. The switch to low-altitude bombing required a number of modifications: an updated radar altimeter, structural reinforcements, modified equipment mounts, an enhanced cooling system, and terrain avoidance radar were all necessary to support missions flown at altitudes as low as 500 ft. By the end of the decade, B-52 capabilities had grown with the addition of the Quail and Hound Dog missile systems. The Quail, a decoy missile, was carried in the aft bomb bay of the B-52 and launched while in flight to the target. The missile was programmed by the crew to match the speed and altitude of the B-52, thus confusing Soviet radar; each Stratofortress carried four of them in addition to the regular nuclear payload. North American's entry, the AGM-28 Hound Dog, was an offensive missile launched from the B-52 to carry a nuclear warhead to its target. With a Mach 2 speed and operating altitudes from 500 to 60,000 ft., the Hound Dog was able to penetrate enemy air defenses to a range of 600 miles. The primary drawback of the Hound Dog was its weight: at roughly 10,000 lbs. each, the B-52 could carry only two of them, with a corresponding fifteen per cent loss of range.
The 1960s saw a change of doctrine for SAC. With the emergence of land-based intercontinental ballistic missiles (ICBMs), as well as submarine-launched ballistic missiles (SLBMs), the manned bomber force became one leg of a nuclear triad. The primary advantage of the missile legs was their relative invulnerability to enemy attack; they were also cheaper to operate than a manned bomber fleet. Both ICBMs and SLBMs offered a quick response to an enemy attack, while manned bombers needed far more time to reach their targets. The growing threat from Soviet ICBMs was another factor countering the effectiveness of the manned bomber leg. Due to the potential for conflict in Berlin, Cuba and a number of third world countries, the Kennedy Administration decided to scrap the policy of massive retaliation, replacing it with the doctrine of flexible response. Instead of maintaining a large nuclear umbrella with small conventional forces, conventional forces were increased in order to keep any potential war from escalating to the nuclear threshold. Under the flexible response doctrine, nuclear weapons were to be used in a limited role against selected targets. Thus the B-52 had a new mission: to loiter on patrol at the edge of Soviet airspace, ready to strike designated targets in a retaliatory role. The Stratofortress was the ideal plane for the job, having the range, speed and payload, as well as an aerial refueling capability.
While the B-52 was designed as a nuclear weapon delivery system, it served an entirely different purpose in Viet Nam. In 1964 seventy-four B-52s were modified with external bomb racks, which could carry an additional twenty-four 750 lb. bombs. The following year Operation Rolling Thunder began, in which the USAF commenced bombing missions in both North and South Viet Nam, with the primary role of the Stratofortress being to support ground operations in the South. The first B-52 mission, Operation Arc Light, was conducted in June 1965 against a suspected Viet Cong stronghold in the Ben Cat District of South Viet Nam. Twenty-seven B-52s participated in the raid, bombing a one-mile by two-mile box. Though only partially successful, the raid proved the potential of the B-52 as a ground attack weapon. Later that year, a number of B-52s underwent modifications to increase their capacity for carpet bombing, and these raids were devastating to anyone in or near the target areas. B-52s bombed North Viet Nam in late 1972 during Operation Linebacker II. These missions helped lead to the peace talks which ended the war, although at a loss of 15 Stratofortresses. During that campaign, B-52 gunners claimed two North Vietnamese MiG-21s – the first hostile aircraft shot down by the plane.
The Stratofortress went on to provide ground support in Operation Desert Storm in 1991, Operation Allied Force in Serbia in 1999, Operation Enduring Freedom in Afghanistan in 2001, as well as Operation Iraqi Freedom in 2003. Over its 63-year career, the B-52 has proven itself both a durable and an adaptable plane, receiving numerous modifications along the way. It has dropped bombs, launched missiles and served as an experimental platform, in addition to launching the X-15 rocket plane. Current efforts by Boeing to re-engine the Stratofortress are projected to extend its service life through 2040. One could say of the B-52, it's the plane that keeps on flying.
During the last five years, both the use of and the uses for drones have increased exponentially. In this blog, we'll trace the employment of drones in a number of industries.
While much of the current drone technology isn't new, recent investments in both capital and technology have made drones a practical tool in a number of industries. The agricultural sector is one in which drone applications are on the rise. With the global population projected to reach about 9 billion by 2050 and agricultural consumption to increase by 70 per cent over the same period, the use of drones in agriculture has the potential to revolutionize that sector of the economy. Agri-drones are high-tech systems which perform many tasks a farmer can't: conducting soil scans, monitoring crop health, applying fertilizers and water, even tracking weather and estimating yields, as well as collecting and analyzing data. With the FAA currently streamlining regulations for agri-drone use, such systems could account for approximately 80 per cent of all drones produced, according to a recent study by Bank of America Merrill Lynch.
A number of construction companies are exploring the possibilities of utilizing drones, or UAVs (Unmanned Aerial Vehicles), in that industry. Drones have a number of roles in construction, among them marketing, surveying, inspection, progress reporting, safety and monitoring workers at multiple sites. In the survey role, drones allow contractors to get detailed information about a job site, as well as conditions on surrounding properties. While site surveyors are necessary in some situations, drones can perform essentially the same function at a fraction of the cost. In the realm of construction inspection, drones offer a high degree of flexibility. For example, a drone can effectively scan the roof of a skyscraper, revealing any possible construction faults. Drones are also useful at sites such as tunnels and bridges, which may be inaccessible from the surrounding land, and a contractor can even use a drone to compare the construction to the actual plans of a project. Drone photography can be utilized to show aerial views of a site from different angles to determine feasibility of construction, and these photos can be sent to potential contractors during the bid process. The same capability is also useful for showing job progress to developers, who may not be able to visit the site on a regular basis. Finally, drones provide a means of monitoring the safety of workers at multiple sites, keeping the contractor informed of any safety issues in real time while requiring a fraction of the manpower and cost of on-site supervisors.
Drones also have potential in the commercial sector. For example, Walmart is currently utilizing drones comparable to those used in agriculture to scan warehouse inventory, checking for missing or misplaced items. Drones flying through a warehouse are able to complete an inventory in a day – a task that would take an on-site warehouse crew a month. Though still in the early stages, a few major companies are using drones for delivery purposes. Domino's Pizza began a delivery service in Britain, in which a drone was able to deliver two pizzas per trip – with the obvious advantage of avoiding traffic jams. In Philadelphia, a dry cleaning service is using drones to make emergency deliveries of laundry to customers. Though weight restrictions are a problem, the drones are capable of flying a freshly cleaned suit to a customer's front door. The latest evolution is party drones, which fly over an outdoor party playing prerecorded music.
While drones haven’t been adopted on a mass scale, they have increased the functionality of a number of key industries, breaking through traditional barriers. From quick deliveries to monitoring construction progress to agriculture, drones increase work efficiency and productivity, improving customer service, safety and security – with little or no manpower. According to a recent PricewaterhouseCoopers study, drone-related activity provides an economic boost of more than $127 billion globally. With the relaxed FAA flight rules approved in 2016, drone operators have more flexibility from which to operate. As it becomes cheaper to develop industry-specific drones, subsidiary niche markets will emerge. A recent study indicates the use of commercial drones could add $82 billion and 100,000 jobs to the national economy by 2025 – not bad for a young industry.
While the United States was a pioneer in aviation development during much of the twentieth century, many of its airports border on a state of decay. During the course of this blog, we'll examine the current state of the nation's airports, as well as a number of proposed solutions.
Though many complain about airports, often as a result of troubled airline experiences, perhaps comparing major air hubs in the United States to their more modern overseas counterparts is unrealistic. Each airport has its own unique history in relation to the community it serves. Aviation development in the US increased dramatically after World War II, with airport construction complementing that effort. Many of the prime airports in the United States were conceived in an era before the proliferation of both foreign and domestic air routes. Most airport renovation efforts over the last 30 years have involved a limited patchwork process, since many of the hubs are surrounded by urban areas – unlike the modern air hubs of Asia and the Middle East, which serve emerging markets and emphasize architecture and aesthetics over sheer passenger volume. For example, Dubai's main airport covers an area of about 7 million square feet, designed to serve 25-30 million passengers per year, while the JetBlue terminal at JFK airport serves approximately 22 million passengers per year in an area of less than 1 million square feet. Post-9/11 security and related requirements have also placed additional stress on US airports. The financial and environmental costs of airport construction often make such proposals a political liability. The ownership and control of airports in the United States – a landlord-tenant model between the airlines and the municipalities – also serves to inhibit progress.
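The density contrast above is worth making concrete with a quick back-of-the-envelope calculation. The figures are the approximate ones quoted in this post; the midpoint of Dubai's 25-30 million passenger range is assumed for illustration:

```python
# Rough passengers-per-square-foot comparison using the figures quoted above.
dubai_area_sqft = 7_000_000          # Dubai's main airport, ~7 million sq ft
dubai_passengers = 27_500_000        # assumed midpoint of the 25-30 million range
jetblue_area_sqft = 1_000_000        # JetBlue terminal at JFK (stated upper bound)
jetblue_passengers = 22_000_000

dubai_density = dubai_passengers / dubai_area_sqft
jetblue_density = jetblue_passengers / jetblue_area_sqft

print(f"Dubai:   {dubai_density:.1f} passengers per sq ft per year")
print(f"JetBlue: {jetblue_density:.1f} passengers per sq ft per year")
```

On these numbers the JetBlue terminal moves roughly five to six times as many passengers per square foot – a rough illustration of how much harder space-constrained US hubs are worked than their newer overseas counterparts.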
Given the constraints on space in many urban areas, airport designers are forced to move up rather than out. In a practical sense, any airport restructuring begins with the check-in process. By placing security and check-in on separate levels, traffic flow is segregated between the two functions. Such an organization divides passengers into two categories – those who are able to check in with the aid of mobile devices, and those who use the more traditional (paper) approach and may require assistance to board their flight. Both groups must pass through security before boarding. Such an arrangement could cut pre-flight processing time by as much as 40 percent. As mobile technology becomes more dominant, it offers air carriers both the convenience and flexibility to book flights outside the confines of an airport. Satellite check-in sites at hotels, restaurants and shopping centers allow airlines the option of verifying and staging passengers from remote locations, requiring less staff and processing time. Delta Air Lines, for example, has set up its own security service at a major airport from which to process passengers. This concept provides both security and marketing benefits.
A recent trend in airport check-in procedures is the use of self-service technology. Miami International Airport purchased approximately 45 automated kiosks to reduce customs and immigration processing time. These kiosks can process a passenger within two minutes, making what was once a grueling check-in process a relatively seamless one. Several major air hubs are also embarking on outside improvements to enhance the passenger experience. For example, Chicago O'Hare began a $15 billion capital investment program in 2005, transforming the current system of intersecting runways into a series of parallel ones, which will increase capacity by 60% while substantially reducing delays. An additional control tower, runway and cargo center are under construction at O'Hare and are slated to be operational in about three years. Los Angeles International began an $8.5 billion expansion program in 2006, with construction completed on the new Tom Bradley International Terminal in 2013, featuring new dining, gate and retail areas designed to meet the needs of international tourists. Related projects include the updating of Terminal 6 to accommodate large-scale aircraft such as the Airbus A380. LAX is also building a new Central Utility Plant, as well as making taxiway and runway improvements.
Understanding how the above innovations affect terminal operations will be the key to the future success of the nation's airports. As air traffic continues to grow despite economic and other setbacks, passengers will continue to demand more control over their travel experience. Airport planners must continue to emphasize key passenger services such as transit, parking and baggage claim to remain competitive, while focusing on the core mission of airports as gateways to the world.
After my grandson flew his newly purchased quadcopter a few weeks ago, I was stunned by the quality of video produced by its camera. During the course of this blog, we will trace the development of compact cameras, as well as their effect on radio control models.
The evolution of drone cameras began in 1901, when renowned photographer George Lawrence conceived the idea of attaching a camera to a balloon to take photos of banquet halls and outdoor ceremonies. Lawrence developed a panoramic camera with a relatively slow shutter speed, which proved ideal for area photographs. While his first balloon pictures were a success, both he and the balloon crashed, with Lawrence surviving a 200 ft. fall without injury. He then developed a camera platform using a series of kites connected by bamboo shafts to support the weight of the camera, and ran a steel piano wire from the ground up to carry the electrical current that would trip the camera shutter. The photos were retrieved by parachute. This system was so successful that Lawrence used it to photograph San Francisco after the 1906 earthquake – from which he earned $15,000.
However, it wasn't until the advent of digital technology in the 1970s, which allowed photography to become more adaptable, that compact cameras became feasible. A digital camera is a hardware device that takes pictures like a conventional camera but stores the image as digital data instead of recording it on film. Most digital cameras are now capable of recording video in addition to taking photos. Perhaps the earliest precursor to digital photography occurred in 1957, when Russell Kirsch, a pioneer of computer technology, developed an image scanning program that used a rotating drum to create images; the first scanned image was a picture of Kirsch's son. In 1969 the charge-coupled device was created at AT&T Bell Labs, in which a semiconductor was capable of gathering data from photoelectric sensors, then transferring that charge to a storage capacitor. Analog data could be transferred from a light-sensitive chip and converted into a digital grid, producing an image. In 1974 Bell Laboratories developed a charge transfer system that could store and transfer charge carriers containing pixel data in serial order. This system was further refined by Bell in 1978, when a charge transfer imaging device was produced using solid-state technologies. The refined system was both more cost-effective and free of the smearing aberrations created by similar image capture devices.
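Kirsch's drum scanner ultimately reduced a photograph to a grid of binary values – sample the brightness at each grid point, then decide black or white. The idea can be sketched in a few lines; the brightness grid and threshold below are invented for illustration, not Kirsch's actual data:

```python
# Illustrative sketch of Kirsch-style image scanning: sample brightness on a
# grid, then threshold each sample into a binary (black/white) pixel value.
# The 'brightness' values are made-up example data.
brightness = [
    [0.9, 0.8, 0.2],
    [0.7, 0.1, 0.1],
    [0.3, 0.2, 0.9],
]
THRESHOLD = 0.5  # samples at or above this become white (1); below, black (0)

scanned = [[1 if sample >= THRESHOLD else 0 for sample in row]
           for row in brightness]

for row in scanned:
    print(row)
```

Kirsch's 1957 scanner produced exactly this kind of binary grid, one drum rotation per scan line; later CCD sensors replaced the mechanical drum with an electronic readout of the same grid of samples.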
In 1973 Eastman Kodak took a gamble and hired Steve Sasson, a young electrical engineer. Sasson was one of a small cadre of electrical engineers employed by Kodak, a company well known for its chemical and mechanical engineering projects. Sasson was directed to capitalize on the capabilities of a charge-coupled device created by Fairchild Semiconductor, which could transmit and store images of 100 by 100 pixels. In 1975 Sasson completed a prototype camera incorporating the charge-coupled device, adapting a lens from an eight-millimeter film camera, an analog-to-digital converter from a Motorola digital voltmeter, and a digital-data cassette recorder for storing image data. With this combination, Sasson and other Kodak technicians could capture an image and record it to a cassette in a mere 23 seconds.
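Those numbers imply a strikingly modest data rate by modern standards. The bit depth below is an assumption made for illustration (the account above gives only the resolution and the 23-second recording time):

```python
# Implied data rate of the 1975 Kodak prototype: a 100 x 100 pixel image
# written to cassette tape in 23 seconds. The 4 bits/pixel figure is an
# assumption for illustration, not stated in the text.
width, height = 100, 100
bits_per_pixel = 4            # assumed bit depth
record_seconds = 23

total_bits = width * height * bits_per_pixel
print(f"Image size: {total_bits} bits ({total_bits // 8} bytes)")
print(f"Effective rate: {total_bits / record_seconds:.0f} bits per second")
```

Under that assumption the whole image is about 5 KB, trickled to tape at under 2 kilobits per second – a reminder of how far storage, not optics, constrained the first digital cameras.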
By 1990 several companies had begun to enter the digital photography market, creating a new segment for consumer cameras. The first digital camera ready for sale in the US market was the Dycam Model 1, which came out the same year. The Model 1 was capable of recording images at a maximum resolution of 376 by 240 pixels. Two developments in the 1990s further enhanced the marketability of digital cameras. The first was a codec used for image compression, the precursor to the JPEG image file format of today. This system exponentially increased the storage capacity of digital cameras over prior magnetic tape and floppy disc storage systems. By the mid-1990s Apple began to market the QuickTake 100, the most widely marketed digital camera in the United States. The QuickTake had a maximum resolution of 640 by 480 pixels and could store up to 24 images in 24-bit color. In 1995 Casio released the QV-10, the first consumer digital camera to include a liquid crystal display (LCD) screen, which allowed camera owners to quickly review newly photographed images. Other developments in the 1990s included a pocketable imaging device with an LCD screen capable of displaying images from a camera storage device, as well as a single-lens digital reflex camera, which could reproduce camera images in 35mm film quality. By the end of the decade, digital cameras had reached a resolution of 2,000 by 2,000 pixels.
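The storage pressure that made the compression codec so important is easy to quantify from the figures above. At the QuickTake's quoted resolution and color depth, a single uncompressed frame already approaches a megabyte:

```python
# Uncompressed storage demand at the QuickTake 100's quoted specs:
# 640 x 480 pixels, 24-bit color, up to 24 stored images.
width, height = 640, 480
bits_per_pixel = 24
image_count = 24

bytes_per_image = width * height * bits_per_pixel // 8
total_bytes = bytes_per_image * image_count

print(f"One frame: {bytes_per_image:,} bytes")
print(f"{image_count} frames: {total_bytes / 2**20:.1f} MiB uncompressed")
```

Without compression, two dozen such frames would need roughly 21 MiB – far beyond what mid-1990s consumer camera memory could hold, which is why compression was the enabling development.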
In the early 2000s a merging of digital camera and lithium polymer battery technologies took place. The flexible polymer battery began to deliver near gas-engine performance with less weight and volume on the RC model frame. Digital cameras were now both lightweight and efficient, capable of both still photos and video covering a relatively wide area. By 2010 a number of drones and smaller quadcopters carried flash drive units, which could be inserted into the RC model camera to record flight video. Once on the ground, the RC pilot could insert the drive unit into the USB port of a personal computer and play the video of the quadcopter flight on the monitor – a far cry from pulling piano wire to trip a camera shutter.
From the journeys of the Apostle Paul to the twenty-first century, missionaries have been on the move, proclaiming the gospel as well as meeting the physical needs of the communities they serve. During the course of this blog, we will trace the development of mission aviation from its earliest days to its global reach of today.
While missionaries were flown into Central America and the Caribbean region as early as the 1920s, it wasn't until after World War II that mission aviation developed into its own unique ministry. One of the first air ministry organizations was the Mission Aviation Fellowship. The MAF was formed in 1946 by several World War II aviators who envisioned a role for aviation in spreading the gospel. The Mission Aviation Fellowship was initially established from three branches, led by Jim Truxton of the United States, Murray Kendon of the United Kingdom and Edwin Hartwig of Australia. The earliest MAF efforts were in Mexico, Peru and Ecuador, with Betty Greene flying two Wycliffe Bible translators to a remote location in Mexico in 1946. By 2010 the MAF supported missionaries in 55 countries, transporting over 200,000 passengers and meeting global mission and humanitarian needs with 130 aircraft.
As a result of the increased global outreach of the Mission Aviation Fellowship and other aviation ministries, a need for pilot training programs became evident. Upon retiring from the Air Force, Davis Goodman was approached by the President of Piedmont Bible College to establish a flight training program for missionaries under development by the college. Flight training began in 1974 with a single instructor, a borrowed aircraft and nine students at a local airport. In 1975 the Mission Aviation Training Institute (MATI) was formed, with Goodman as program director and a Cessna 150 purchased and dedicated to training. Within four years, the program leased space at a larger airport, followed by the addition of an Airframe and Powerplant Mechanic School in 1981. In 1984 Goodman ceded both ownership and operational control of Sugar Valley Airport and MATI (now Missionary Aviation Institute) to Piedmont Baptist College. With more pilots than planes for mission efforts, Goodman founded Aviation Ministries International (AMI) in 1984 with the primary tasks of fundraising and aircraft acquisition. By 2015 AMI (now Missionary Air Group) was providing both mission and medical services to outlying areas in more than a dozen countries.
With the steady growth and progress of mission aviation over the past seventy years, as well as improvement in transport systems in underdeveloped areas, some have questioned whether mission aviation is still relevant. However, when one considers the perspective of a pilot, a different picture arises. While the major cities of the world are easily accessible by jetliner, reaching remote local areas remains a problem. Transportation is not uniform within many of these countries, with highways turning into back roads within a fifty-mile radius of urban areas; a journey of a few hours by plane could take a day on foot. Secondly, roads are actually disappearing in some of the remote areas of the world. For example, in a number of African countries, terrain one could once cross in a couple of days is nearly impassable today, with bridges and roads in disrepair or reclaimed by jungle growth due to political instability and inadequate funding. Also, in many instances air transport remains a cost-effective means of travel. A mission organization in Brazil chartered a motorized canoe for a trip up the Amazon river only to find out they could have chartered a Cessna 206 float plane for an identical rate. National aviation organizations now exist, fully staffed and funded by local mission groups: Asas de Socorro in Brazil manages five bases along the Amazon in addition to operating a flight school in Anapolis, training students from other Latin American countries. Finally, mission aviation remains the most flexible and responsive tool to reach otherwise impassable areas. In Morocco, where mission work has thrived for years along the populated coastal cities, the Berber tribesmen of the Atlas Mountains remain without a church due to the ruggedness of the terrain and their relative isolation.
While watching the recent movie Sully, I was amazed at the sophistication of current flight simulators available to the major aircraft producers. During the course of this blog, we will trace the development of flight simulators from mere mechanical devices to the virtual reality electronics of today.
A flight simulator is a mechanical or electronic device that attempts to duplicate both aircraft flight and the environment in which the aircraft flies. Current simulators can replicate factors such as flight controls, wind, moisture and electronic system interaction. While flight simulation is used primarily for pilot training, it may also be used in aircraft design, as well as to study the effects of aircraft characteristics.
The earliest flight simulators were used during World War I to teach gunnery techniques. A static simulator with a model aircraft passing in front helped both pilots and gunners develop correct lead angles to the target. This was the only form of flight simulation for nearly ten years. The Link Trainer, developed by Edwin Link in the late 1920s, capitalized on the pneumatic devices used in the player pianos and organs of the family musical instrument business. The first trainer was patented in 1930, with an electrical suction pump driving the various control valves operated by stick and rudder action while another motor simulated the effects of wind and other external disturbances. These actions could be manually adjusted to provide a variety of flight characteristics.
While the Link Trainer provided a quantum leap in capability over previous flight simulators, many in both the military and civil aviation communities believed live flight offered a better training environment. However, by the early 1930s, the United States Army Air Corps needed flight simulators that could train mail pilots to fly by instruments for long distances. One enhancement to the Link Trainer was a device called the course plotter, in which a self-propelled tracker remotely traced the trainer's position with an inked wheel, while communications between pilot and instructor were facilitated by simulated radio beacons.
It was during the late 1930s that flight simulation began to be based on electronics. The Dehmel Trainer, developed by Dr. R. C. Dehmel of Southwestern Bell, coupled a Link Trainer with an advanced radio simulation system, which could accurately duplicate navigation signals transmitted to a receiving aircraft, providing a state-of-the-art simulation of radio navigation aids. The Aerostructor, developed by A. E. Travis, used a fixed-base trainer with a moving visual presentation, as opposed to radio and electronic signals. This presentation was based on a loop of film which depicted the effects of course changes, pitch and roll. While the Aerostructor was never mass produced, a modified version of it saw service with the US Navy.
During World War II advances in aircraft design such as retractable landing gear, variable-pitch propellers and higher speeds created a demand for more realistic forms of flight simulation. In response, the Hawarden Trainer was developed, using a cutaway center section of a Spitfire fuselage that allowed training in all aspects of operational flight. In 1939, the British needed a simulator that could train the navigators who were ferrying US aircraft across the Atlantic. The navigator was supported by a number of radio aids, as well as a celestial dome in which the positions of the stars changed with time, longitude and latitude. The Celestial Trainer, designed by Ed Link and P. Weems, was also modified to train bomber crews, with simulated landscapes giving the bomb aimer target sightings as they would appear from a moving aircraft. Rediffusion (Redifon) produced a navigation device in 1940 which simulated existing radio direction equipment, allowing two stations to take a fix on an aircraft's position. By the end of the war, aircrews were being trained with simulated radar signals to acquaint them with the new types of radar developed during the war.
While the science of flight simulation had progressed dramatically over the previous thirty years, simulators were still unable to accurately duplicate the performance characteristics of a particular plane. This changed with the arrival of subsonic jetliners in the 1950s, as aircraft manufacturers began to produce more complete data backed by extensive flight testing. This data was stored on analog computers, making it transferable, but requiring more hardware as aircraft testing became more sophisticated. By the early 1960s, digital computers began to replace the aging analog units due to their increased data capacity and speed. The most successful of these, the Link Mark I, operated with three parallel processors – function, arithmetic and radio station selection – using a drum memory for data storage. By the 1970s the majority of computer systems could be adapted for flight simulation.
During that decade computer image generation, or CGI, technology became available for flight simulation. This technology, adapted from the space program, used a ground plane image supplemented by three-dimensional graphics, and it has become far more sophisticated in recent years as it has been mated to advances in digital computing – a far cry from the rolling ground plane pictures of the 1940s. Today, flight simulation is a colossal industry, spanning the globe with a wide range of high-tech applications for both aircraft users and producers, enhancing the safety of both crew and passengers.
When one considers prominent German-Americans, names such as Eisenhower, Nimitz, Kaiser and Kissinger come to mind. However, another German-American, not often cited, may have left an even greater legacy.
William E. Boeing was born in Detroit, Michigan in 1881 to Wilhelm Boing of Hagen-Hohenlimburg, Germany and Marie M. Ortmann of Vienna, Austria. The senior Boing was a mining engineer who became wealthy through holdings of timber lands and mineral rights near Lake Superior. After study abroad in Switzerland, Boing added an e to his name to make it sound more Anglo. He then entered Yale, but left before graduating to join the family timber business in 1903. Buying a large tract of forest on the Pacific side of the Olympic Peninsula in Washington, Boeing began building boats as well as acquiring several lumber operations.
During a business trip to Seattle in 1909, Boeing saw his first plane and soon developed a keen interest in aviation. Within a few months, Boeing was taking flying lessons at the Glenn L. Martin plant in Los Angeles and had ordered a Martin TA hydroaeroplane. Martin even sent one of his test pilots up to Seattle to give Boeing lessons on site. When the test pilot crashed the aircraft during a test flight, he informed Boeing that replacement parts would not be available for months. The problem frustrated Boeing, who had just received his pilot's certificate. After studying both the plane and the parts distribution at Martin, Boeing approached a friend, Commander George Conrad Westervelt, USN. When Boeing suggested they could build their own plane in less time, Westervelt agreed, and they formed their own aircraft company – B&W. Their first aircraft, the B&W seaplane, was an instant success, with Boeing purchasing an old boat factory on the Duwamish River outside Seattle.
When the United States entered World War I, Boeing and Westervelt received a government contract for fifty of the B&W seaplanes, with Boeing changing the name of the fledgling company to Pacific Aero Products Company. By the end of the war, Boeing began to emphasize commercial aircraft, in addition to providing a government-sponsored air mail service.
The air mail service was a response to a commercial aviation market flooded with surplus World War I aircraft, which were relatively inexpensive compared with new models. Boeing had to diversify at this point, selling furniture and a series of flat-bottomed boats called sea sleds. Within a few years, Boeing began to realize a profit from the overhaul of government aircraft and the sale of a few new models. During the 1920s and early 1930s, Boeing would become a major producer of fighter planes for the Army Air Corps.
In 1925 federal law opened air mail contracts to public bidding. Boeing received a contract, but needed a fleet of twenty-six planes to serve the Chicago-to-San Francisco route by July 1, 1927. As a guarantee, Boeing drew $500,000 of his own money to serve as a bond for the effort. The fleet consisted of Boeing's latest design, the Model 40, which had an open cockpit for the pilot and an enclosed cabin for two passengers. The mail service proved to be an unexpected market coup for Boeing, allowing him to haul passengers for a fee and start a new airline, Boeing Air Transport. It wasn't long before Boeing cornered the market in both aviation sectors.
In 1929 Boeing acquired Pacific Air Transport, merging it with both the Boeing Airplane Co. and Boeing Air Transport. The new company was named the United Aircraft and Transport Corporation. Later the same year, United purchased both the Pratt & Whitney engine and Hamilton Standard Propeller companies, as well as Chance Vought Aircraft. To expand its airline service, Boeing acquired National Air Transport the following year.
By 1934 Boeing's success began to draw the attention of the federal government. In June of that year Congress passed the Air Mail Act, under which aircraft manufacturers had to divest themselves of any airline services. As a result of this split, Boeing's holdings were formed into three companies: United Aircraft Corporation, which manufactured aircraft in the eastern United States (now United Technologies Corporation); Boeing Airplane Company, manufacturing aircraft in the western United States; and United Airlines, which served the air routes.
A week after the Air Mail Act was passed, Boeing resigned as chairman and sold his stock in the firm. Shortly after his resignation, however, William Boeing received the coveted Daniel Guggenheim Medal for achievement in the field of aviation. During World War II, he came out of retirement to act as an advisor to the company as it met the demands of combat aircraft development. The company he started in 1916 went on to develop such influential aircraft as the B-17 Flying Fortress, B-29 Superfortress, B-47 Stratojet and B-52 Stratofortress. Boeing produced an equally impressive series of airliners, starting with the Stratoliner in 1939, the world's first pressurized airliner, followed by the jet-powered 707, 727 and 737, and the Boeing 747, the world's first Jumbo Jet. A recent first for Boeing was the successful development and production of the 787 Dreamliner, the first jetliner in service made of carbon-fiber materials. Boeing is now involved in the space technology sector, in addition to the production of aircraft. Not bad for someone who made the decision to build his own plane in 1916.
This article is the last of a series about the heroes of aviation.