Channel listing page

All archived videos of New Mind
The Ingenious Simplicity Of O-Rings

ug1YVrBvJZQ | 14 Nov 2024

The Ingenious Simplicity Of O-Rings

▶️ Visit https://brilliant.org/NewMind to get a 30-day free trial + 20% off your annual subscription

This video explores the fascinating world of O-rings - simple yet critical components that have shaped modern engineering and safety standards. From their invention by Danish machinist Niels Christensen to their tragic role in the Challenger disaster, we uncover how these humble seals have become fundamental to countless technologies we rely on daily.

Key topics covered:
1. The evolution of O-ring design from the 1800s to modern day
2. How O-rings work: The physics of viscous fluid sealing
3. Different types of sealing applications: Static vs Dynamic
4. Material science: From natural rubber to advanced synthetics
5. Critical design considerations and failure modes
6. The Challenger disaster: A case study in engineering safety

Learn how these seemingly simple components tackle complex challenges:
- Sealing pressures from vacuum to thousands of PSI
- Operating in extreme temperatures (-175°F to 600°F)
- Managing dynamic motion in rotating and reciprocating systems
- Maintaining reliability in critical safety applications

Discover the intricate engineering behind O-ring systems:
- Gland design and surface finish requirements
- Material selection and chemical compatibility
- Installation considerations and lubrication
- Back-up devices and extrusion prevention

We'll explore how modern engineering continues to advance O-ring technology:
- Advanced materials like Perfluoroelastomers (FFKM)
- Specialized coatings and surface treatments
- Applications in emerging technologies
- Lessons learned from historical failures

SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind

#Engineering #MaterialScience #SpaceHistory #MechanicalEngineering #EngineeringFailures #TechnologyHistory #Seals #IndustrialDesign #SafetyEngineering #Innovation

The Ingenious Simplicity Of O-Rings

44RHhW7HyPg | 26 Oct 2024

The Ingenious Simplicity Of O-Rings

▶️ Visit https://brilliant.org/NewMind to get a 30-day free trial + 20% off your annual subscription

This video explores the fascinating world of O-rings - simple yet critical components that have shaped modern engineering and safety standards. From their invention by Danish machinist Niels Christensen to their tragic role in the Challenger disaster, we uncover how these humble seals have become fundamental to countless technologies we rely on daily.

Key topics covered:
1. The evolution of O-ring design from the 1800s to modern day
2. How O-rings work: The physics of viscous fluid sealing
3. Different types of sealing applications: Static vs Dynamic
4. Material science: From natural rubber to advanced synthetics
5. Critical design considerations and failure modes
6. The Challenger disaster: A case study in engineering safety

Learn how these seemingly simple components tackle complex challenges:
- Sealing pressures from vacuum to thousands of PSI
- Operating in extreme temperatures (-175°F to 600°F)
- Managing dynamic motion in rotating and reciprocating systems
- Maintaining reliability in critical safety applications

Discover the intricate engineering behind O-ring systems:
- Gland design and surface finish requirements
- Material selection and chemical compatibility
- Installation considerations and lubrication
- Back-up devices and extrusion prevention

We'll explore how modern engineering continues to advance O-ring technology:
- Advanced materials like Perfluoroelastomers (FFKM)
- Specialized coatings and surface treatments
- Applications in emerging technologies
- Lessons learned from historical failures

SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind

#Engineering #MaterialScience #SpaceHistory #MechanicalEngineering #EngineeringFailures #TechnologyHistory #Seals #IndustrialDesign #SafetyEngineering #Innovation

The Hose Clamp Story

1RJunnhpFIM | 28 Sep 2024

The Hose Clamp Story

▶️ Visit https://brilliant.org/NewMind to get a 30-day free trial + 20% off your annual subscription

From ancient Greece to modern automobiles, the humble hose clamp has played a crucial role in engineering and industry. The hose clamp's journey reflects the broader evolution of engineering and manufacturing. From simple hammered rings to complex plastic fittings, this essential component has adapted to meet the changing needs of industry. While modern automotive design may be moving away from traditional metal clamps, the ingenuity behind their development continues to influence engineering solutions across various fields.

Ancient Beginnings:
- 500 BC: Greeks use hammered copper rings to secure oxhide and ox intestine hoses for water transport and firefighting
- Chinese create bamboo pipes for natural gas, sealed with plant fibers
- 1st century AD: Soft metal rings, leather straps, and plant fiber wrappings become common for sealing various fluids

Industrial Revolution and Beyond:
- 1821: First rubber-lined, cotton-webbed "gum hose" patented
- 1844: Charles Goodyear patents vulcanization, revolutionizing rubber products
- Early clamping methods: Threaded brass couplers, riveting, wire binding
- 1880s: Introduction of the "hosebinder" or Cotter Type Hose Clamp, the first modern hose clamp

The Birth of the Worm Drive Clamp:
- 1896: Knut Edwin Bergström patents the worm-drive clamp (Jubilee Clip)
- Simple, adaptable design allows for higher clamping forces and easy adjustment
- Bergström founds ABA to manufacture his invention, creating a new product category

Specialized Clamps for New Industries:
- Early 20th century: Aviation and automobile industries drive development of new clamp variants
- Type B and D clamps: Flat band body with machine screw and embedded square nut for higher pressure applications
- Type C (tower) clamps: Bridge structure for limited access areas

World War II and Cost Reduction:
- Type A (wire) clamps: Formed wire design reduces material and manufacturing costs
- Lightweight but prone to hose damage and improper installation

High-Performance Clamps:
- 1940s: V-band (Marman) clamps developed for aerospace applications
- Used to secure atomic bombs in B-29 bombers
- Provide quick assembly/disassembly and uniform sealing under extreme conditions
- Later adopted in automotive, food, and pharmaceutical industries

T-Bolt Clamps:
- 1950s: T-bolt mechanism adapted for general-purpose hose clamps
- Popularized in the 1970s by Breeze Industrial Products Corporation
- Offers up to four times the clamping force of traditional clamps

Spring Hose Clamps:
- Post-WWII: Automotive industry seeks further cost reduction
- Type E clamps provide constant tension and self-adjustment
- Ideal for engine coolant and vacuum connections

Ear Clamps:
- 1951: Hans Oetiker invents the single-ear clamp
- Quick installation, consistent tension, and low profile
- Suitable for high-pressure applications and tamper detection
- Variations include two-ear and stepless ear clamps

The Plastic Revolution:
- 1990s: Automotive industry shifts towards plastic fittings
- Driven by cost savings, design flexibility, and assembly optimization
- High-performance thermoplastics used for strength and temperature resistance
- Integrated features like sensors and quick-connect mechanisms
- Concerns about long-term durability and environmental impact

SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind

#EngineeringHistory #HoseClamps #IndustrialRevolution #AutomotiveTechnology #AerospaceEngineering

The Clever Engineering Of Piston Rings

EFfyWbi3APk | 31 Aug 2024

The Clever Engineering Of Piston Rings

▶️ Visit https://brilliant.org/NewMind to get a 30-day free trial + 20% off your annual subscription

This video explores the evolution and engineering marvels of these small but crucial components that have shaped the history of motorized transportation. We begin our journey in the mid-1800s with the first internal combustion engines, tracing the development of piston rings from their humble origins in steam engines to the high-tech marvels used in modern Formula 1 racing. Whether you're a car enthusiast, an engineering student, or simply curious about how things work, this deep dive into piston ring technology will give you a new appreciation for the clever engineering behind these small but critical components that have been quietly revolutionizing engines for over a century.

Key topics covered:
1. Early piston designs and sealing challenges
2. The invention of the split-ring seal by John Ramsbottom
3. Evolution of piston ring profiles: Rectangular, taper, bevel, and Napier rings
4. The introduction and importance of oil control rings
5. Materials science: From cast iron to advanced steel alloys
6. Cutting-edge coatings: Chrome, molybdenum, and diamond
7. The extreme engineering of Formula 1 piston rings

Learn how these seemingly simple components tackle complex challenges:
- Sealing combustion gases at pressures exceeding 300 bar
- Managing temperatures that would melt conventional materials
- Reducing friction to improve engine efficiency
- Controlling oil consumption and emissions

Discover the intricate balance of properties that make piston rings work:
- Tangential tension and specific contact pressure
- Radial pressure distribution and gas pressure metering
- The critical role of ring profiles in gas sealing and oil control

We'll explore how modern engineering pushes the limits of piston ring design:
- Ultra-thin rings measuring just 0.5-0.7mm in Formula 1 engines
- Advanced coatings like Physical Vapor Deposition (PVD) and chrome-ceramics
- The trade-offs between performance, efficiency, and longevity in consumer vehicles

SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind

#PistonRings #AutomotiveEngineering #InternalCombustionEngines #Formula1Technology #MaterialsScience #EngineEfficiency #AutomotiveHistory #MechanicalEngineering #EngineeringInnovation #CarTechnology

The Evolution Of Nuclear Weapon Locks

F1LPmAF2eNA | 27 Jul 2024

The Evolution Of Nuclear Weapon Locks

▶️ Visit https://brilliant.org/NewMind to get a 30-day free trial + 20% off your annual subscription

In the early days of nuclear weapons, security measures were surprisingly basic. This video explores the fascinating history and development of Permissive Action Links (PALs), the sophisticated systems that now safeguard nuclear arsenals.

Timeline:
1940s-1950s: Primitive security measures
- Weapons kept partially disassembled
- No formal policy on custody, control, and proliferation
- Simple 3-digit combination locks introduced
1953: Missiles and Rockets agreements
- Defined roles of Atomic Energy Commission (AEC) and Department of Defense (DoD)
1961: Committees formed to study use control
- Special Warhead Arming Control (SWAC) Committee
- Safety Steering Group
- Joint Command and Control Study Group Project 106
August 1961: Secretary of Defense requests AEC to create permissive links for NATO weapons
June 1962: President Kennedy issues NSAM 160
- Mandates PALs on all U.S. nuclear weapons in NATO countries

How PALs Work:
1. Isolation: Critical components enclosed in "exclusion region"
2. Incompatibility: Designed to prevent accidental activation
3. Inoperability: "Weak links" render weapon inoperable in extreme conditions

Key Components:
- Stronglinks: Rugged electromechanical devices controlling weapon arming
- Energy control elements: Create pathways into exclusion region

PAL Categories:
Category A (1960s):
- MC1541 coded switch (5-digit code)
- Complex operation, took 30 seconds to 2.5 minutes
- Required multiple support equipment pieces
Category B (mid-1960s):
- MC1707 coded switch (4-digit code)
- Faster operation, cockpit control possible
- Fewer wires, parallel unlocking for multiple weapons
Category C (mid-1970s):
- Extended Cat B capabilities
- 6-digit code
- Introduced limited code attempt lockouts
Category D (1975):
- First microprocessor-based PAL (MC2764)
- Multiple Code Coded-Switch (MCCS) concept
- 6-digit codes for various functions (arm, train, disable)
- Interfaced with MC2969 Intent stronglink
- Anti-intrusion sensors, some self-powered
Category F (mid-1980s):
- 12-digit code system
- Advanced features: code-driven disable modes, emergency stops
- Variable yield adjustment via code
- Encryption in the arming process

Key Developments:
1980s: Modernization efforts
- Second-generation stronglinks: detonator and dual magnetic
- Improved reliability and reduced manufacturing costs
1997: PALs installed on all U.S. nuclear devices
- U.S. Navy last to receive them
2001: PAL Code Management System (CMS) deployed
- End-to-end encrypted method for re-coding weapons
- MC4519 MCCS Encryption Translator Assembly
2004: CMS fully implemented across all PAL systems

Future Developments:
- Ongoing miniaturization and ruggedization
- Micromachining technologies for mm-sized components

SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind

#NuclearSecurity #PermissiveActionLinks #MilitaryHistory #DefenseTechnology #NuclearWeapons #ColdWar

The Science Of Cutting

NmHyfI_sgz8 | 22 Jun 2024

The Science Of Cutting

▶️ Visit https://brilliant.org/NewMind to get a 30-day free trial + 20% off your annual subscription This exploration of cutting technology spans from prehistoric stone tools to modern computer-controlled machine tools, tracing how this fundamental concept has shaped human civilization and continues to evolve today. The story begins in prehistoric times, with the first evidence of sharp tools dating back 2.6 million years. Early hominids used crude stone "choppers" to cut meat and work with wood, empowering them to create more advanced implements. The science of cutting involves separating materials through highly directed force, with the cutting tool needing to be harder than the material being cut. The Bronze Age marked a revolution in cutting technology, as humans transitioned from stone to metal tools around 6000 BC. Copper's low melting point made it ideal for early metalworking, and the discovery of bronze alloys created harder, more durable cutting tools. This period also saw the rise of metallurgy, the study of metals' physical and chemical properties. Crystal lattice structure, dislocations, and grain boundaries are key concepts in understanding metal behavior. Techniques like alloying, heat treatment, and work-hardening improve metal properties for specific applications. The Iron Age brought further advancements with improved furnace technology enabling iron smelting. Bloomeries produced workable iron by hot-forging below melting point, while blast furnaces increased production, creating cast iron for structural use. Puddling furnaces later allowed the production of wrought iron with lower carbon content. The dawn of the Steel Age marked a turning point in cutting technology. Steel combined iron's strength with improved workability, and innovations like the Bessemer process and Open Hearth method made steel production more efficient and affordable. This led to the rise of industrial giants like US Steel, the world's first billion-dollar corporation. Machine tools evolved from early developments like the bow lathe and water-powered boring mill to Maudslay's revolutionary screw-cutting lathe in 1800. Eli Whitney's milling machine in 1820 enabled mass production, and by 1875, the core set of modern machine tools was established. The mid-20th century saw the introduction of numerical control (NC) for automation, followed by computer numerical control (CNC) machines in the 1970s. Advancements in cutting tool materials played a crucial role in this evolution. High-speed steel, introduced in 1910, addressed the limitations of carbon steel by maintaining hardness at higher temperatures. Carbide tools, developed from Henri Moissan's 1893 tungsten carbide discovery, combined extreme hardness with improved toughness. The manufacturing process of cemented carbides impacted tooling design, including the development of replaceable cutting inserts. Exotic materials like ceramics and diamonds found use in specific high-speed applications and abrasive machining. Looking to the future, emerging non-mechanical methods like laser cutting and electrical discharge machining challenge traditional techniques. Additive manufacturing (3D printing) poses a further challenge to traditional subtractive processes. Despite these new technologies, mechanical cutting remains dominant due to its versatility and efficiency, with increasing automation and integration keeping it relevant in modern manufacturing. 
From the first stone tools to today's computer-controlled machines, cutting has shaped the world in countless ways. As humanity looks to the future, the principles of cutting continue to evolve, adapting to new materials and manufacturing challenges. This journey through cutting technology offers insights into a fundamental process that has driven human progress for millennia, appealing to those interested in history, engineering, and the intricacies of how things are made. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind

The Next Generation Of Brain Mimicking AI

ythnIwpQCgQ | 25 May 2024

The Next Generation Of Brain Mimicking AI

▶️ Visit https://brilliant.org/NewMind to get a 30-day free trial + 20% off your annual subscription The tech industry's obsession with AI is hitting a major limitation - power consumption. Training and using AI models is proving to be extremely energy intensive. A single GPT-4 request consumes as much energy as charging 60 iPhones, 1000x more than a traditional Google search. By 2027, global AI processing could consume as much energy as the entire country of Sweden. In contrast, the human brain is far more efficient, with 17 hours of intense thought using the same energy as one GPT-4 request. This has spurred a race to develop AI that more closely mimics biological neural systems. The high power usage stems from how artificial neural networks (ANNs) are structured with input, hidden, and output layers of interconnected nodes. Information flows forward through the network, which is trained using backpropagation to adjust weights and biases to minimize output errors. ANNs require massive computation, with the GPT-3 language model having 175 billion parameters. Training GPT-3 consumed 220 MWh of energy. To improve efficiency, research is shifting to spiking neural networks (SNNs) that communicate through discrete spikes like biological neurons. SNNs only generate spikes when needed, greatly reducing energy use compared to ANNs constantly recalculating. SNN neurons have membrane potentials that trigger spikes when a threshold is exceeded, with refractory periods between spikes. This allows SNNs to produce dynamic, event-driven outputs. However, SNNs are difficult to train with standard ANN methods. SNNs perform poorly on traditional computer architectures. Instead, neuromorphic computing devices are being developed that recreate biological neuron properties in hardware. These use analog processing in components like memristors and spintronic devices to achieve neuron-like behavior with low power. Early neuromorphic chips from IBM and Intel have supported millions of simulated neurons with 50-100x better energy efficiency than GPUs. As of 2024, no commercially available analog AI chips exist, but a hybrid analog-digital future for ultra-efficient AI hardware seems imminent. This could enable revolutionary advances in fields like robotics and autonomous systems in the coming years. VISUALIZATIONS Denis Dmitriev - https://www.youtube.com/@DenisDmitrievDeepRobotics Jay Alammar - https://youtube.com/@arp_ai Ivan Dimkovic - https://youtube.com/@321psyq Carson Scott - https://youtube.com/@carsonscott260 SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind
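The description above contrasts conventional ANNs with spiking neurons that integrate a membrane potential and fire only when a threshold is crossed. As a rough, hedged illustration of that event-driven behavior (a generic leaky integrate-and-fire model of my own, not anything specific to the chips or models discussed in the video), here is a minimal Python sketch; the time constant, threshold, and input current are arbitrary assumptions:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a rough sketch of the
# event-driven, spike-when-threshold-crossed behavior described above.
# All constants (tau, threshold, reset, refractory period) are illustrative.

def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0, refractory_steps=5):
    """Return the list of time steps at which the neuron spiked."""
    v = v_rest
    refractory = 0
    spikes = []
    for t, i_in in enumerate(input_current):
        if refractory > 0:
            refractory -= 1          # neuron stays silent right after a spike
            v = v_reset
            continue
        # Leaky integration: potential decays toward rest, driven by input.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:            # threshold crossed -> emit a discrete spike
            spikes.append(t)
            v = v_reset
            refractory = refractory_steps
    return spikes

if __name__ == "__main__":
    # Constant drive produces a regular spike train; zero drive produces none,
    # which is the energy-saving, event-driven property the description highlights.
    steady = [60.0] * 1000
    quiet = [0.0] * 1000
    print("spikes with input:   ", len(simulate_lif(steady)))
    print("spikes without input:", len(simulate_lif(quiet)))
```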

The Spark Plug Story

Tdsv4rBEPmo | 20 Apr 2024

The Spark Plug Story

▶️ Visit https://brilliant.org/NewMind to get a 30-day free trial + 20% off your annual subscription The spark plug, a crucial component in gasoline internal combustion engines, has a rich history dating back to 1859 when Belgian engineer Jean Joseph Étienne Lenoir first used it in his coal gas and air engine. The design was refined by inventors like Nikola Tesla, Frederick Richard Simms, and Robert Bosch, with Bosch being the first to develop a commercially viable spark plug. Spark plugs ignite the air-fuel mixture in the engine's combustion chamber by creating a spark between two electrodes separated by an insulator. The spark ionizes the gases in the gap, causing a rapid surge of electron flow that ignites the mixture, creating a controlled combustion event. Early spark plugs used mineral insulators and had short lifespans. The introduction of sintered alumina in the 1930s improved insulation, strength, and thermal properties, allowing higher voltages and better self-cleaning capabilities. In the 1970s, lead-free gasoline and stricter emissions regulations prompted further redesigns, including the use of copper core electrodes to improve self-cleaning and prevent pre-ignition. Multiple ground electrode plugs and surface-discharging spark plugs were explored in the following decades. The 1990s saw the introduction of coil-on-plug ignition systems and noble metal high-temperature electrodes, enabling higher voltages, stronger sparks, and longer service life. Modern spark plugs also incorporate ionic-sensing technology, which allows the engine control unit to detect detonation, misfires, and optimize fuel trim and ignition timing for each cylinder. This level of control has pushed engine designs to be more efficient and powerful. As electric vehicles become more prevalent, the spark plug's evolution may soon reach its end, with electricity both pioneering the emergence and likely ushering in the end of the internal combustion engine. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind

The Ingenious Mechanics of Driveshafts

Vxfay5Y9lzY | 23 Mar 2024

The Ingenious Mechanics of Driveshafts

▶️ Visit https://brilliant.org/NewMind to get a 30-day free trial + 20% off your annual subscription The evolution of automotive drivelines began centuries ago with horse-drawn implements, such as the Watkins and Bryson mowing machine, which introduced the first modern conceptualization of a driveshaft in 1861. Early automobiles primarily used chain drives, but by the turn of the century, gear-driven systems became more prevalent. The 1901 Autocar, designed by Louis S. Clarke, was considered the first shaft-driven automobile in the U.S., featuring a rear-end layout with a sliding-gear transmission, torque tube, and bevel gear assembly with an integrated differential. Autocar used a "pot type" universal joint, which was later superseded by the more robust Cardan universal joint, first used in the 1902 Spyker 60 HP race car. Cardan universal joints, named after the Italian mathematician Gerolamo Cardano, consisted of two yokes connected by a cross-shaped intermediate journal, allowing power transmission between shafts at an angle. These joints used bronze bushings and later needle roller bearings to reduce friction and increase durability. Slip yokes were incorporated into the driveline assembly to accommodate axial movement. However, Cardan joints had limitations, such as non-uniform rotational speeds and increased friction at higher angles. Throughout the 1920s, several design variations were developed to address these limitations. Ball and trunnion universal joints, like those used in the 1928 Chrysler DeSoto, allowed for greater angle misalignment and integrated slip characteristics. Double Cardan shafts, which used two universal joints connected by an intermediate propeller shaft, became a popular choice for rear-wheel drive vehicles due to their design flexibility, manufacturability, and torque capacity. Constant velocity (CV) joints were introduced in the late 1920s to address the limitations of Cardan joints in front-wheel drive vehicles. The Tracta joint, invented by Jean-Albert Grégoire, was one of the first CV joints used in production vehicles. However, the most practical and popular design was the Rzeppa joint, invented by Ford engineer Alfred H. Rzeppa in 1926. Rzeppa joints used ball bearings to provide smooth power transfer at high angles. Tripod joints, developed in the 1960s, were commonly used on the inboard side of front-wheel drive half-shafts due to their affordability and ability to accommodate axial movement. During the 1960s, manufacturers began experimenting with CV joints on propeller shafts for rear-wheel drive cars to achieve smoother power transfer. Double Cardan joints, which placed two Cardan joints back-to-back in a single unit, were also developed for use in high-articulation, high-torque applications. Until the 1980s, drive shafts were primarily made from steel alloys. In 1985, the first composite drive shafts were introduced by Spicer U-Joint Division of Dana Corporation and GM. Composite drive shafts, made from carbon fiber or glass fiber in a polymer matrix, offered significant weight savings, high strength-to-weight ratios, and inherent damping properties. As the automotive industry looks towards a future with alternative power sources, driveline components and universal joints remain crucial elements. Despite attempts to eliminate drivelines using hub electric motors, the traditional drivetrain layout is likely to remain dominant in the near future. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind
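The description notes that Cardan joints suffer from non-uniform rotational speeds at higher operating angles. As a hedged illustration of that effect (my own sketch, not material from the video), the snippet below evaluates the standard single-joint speed-ratio relation over one input revolution; the 15-degree joint angle is an arbitrary example, not a value from any specific driveline:

```python
# Quick numerical look at the non-uniform output speed of a single Cardan
# (universal) joint. The 15-degree operating angle is an arbitrary example.
import math

def output_speed_ratio(input_angle_rad, joint_angle_rad):
    """Instantaneous ratio of output to input angular velocity for one Cardan joint."""
    beta = joint_angle_rad
    theta = input_angle_rad
    return math.cos(beta) / (1.0 - math.sin(beta) ** 2 * math.sin(theta) ** 2)

if __name__ == "__main__":
    beta = math.radians(15.0)          # shaft misalignment angle (assumed)
    ratios = [output_speed_ratio(math.radians(a), beta) for a in range(0, 360, 5)]
    print(f"min speed ratio: {min(ratios):.4f}")   # output momentarily slower than input
    print(f"max speed ratio: {max(ratios):.4f}")   # output momentarily faster than input
    # A second joint phased correctly (the double Cardan arrangement mentioned
    # above) cancels this fluctuation, as does a constant velocity joint.
```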

The Fascinating Evolution of Automotive Wiring

TOMH_DN33q4 | 24 Feb 2024

The Fascinating Evolution of Automotive Wiring

▶ Visit https://brilliant.org/NewMind to get a 30-day free trial + the first 200 people will get 20% off their annual subscription The fascinating evolution of automotive electrical systems traces back to the first mass-produced electrical system in the Ford Model T. Over its 19-year production, the Model T's electrical setup evolved from a simple magneto-powered ignition to incorporating elements found in modern vehicles. The narrative unfolds the transition from cloth-covered wires to advanced multipin and modular connectors, highlighting the technological leaps in automotive wiring. In the early days, vehicles like the Ford Model T relied on cloth-covered, stranded copper wires, offering flexibility but limited durability. Early wiring faced challenges like moisture absorption and vulnerability to abrasion, leading to unreliable electrical systems. The introduction of rubber-covered wires presented a solution, albeit with its own set of drawbacks, such as brittleness over time. The 1930s marked a significant shift with the introduction of bullet and spade terminals, eliminating the need for fasteners and allowing for more secure connections in tight spaces. This period also saw the advent of crimping, a method that enhanced connection reliability by avoiding soldering defects and improving resistance to vibration. As vehicles became more complex, the need for efficient and reliable connectors grew. The aviation industry's adoption of circular connectors in the 1930s paved the way for similar advancements in automotive wiring. These connectors, characterized by their ruggedness and ease of use, set the stage for the standardization of components, ensuring reliability across various applications. The introduction of synthetic polymers like PVC in the 1920s and 1930s revolutionized wire insulation, offering superior resistance to environmental factors. However, the evolving demands of automotive systems called for even more durable materials, leading to the adoption of advanced insulation materials in high-stress applications. The 1950s saw vehicles integrating more amenities, necessitating the development of less costly, plastic-based multipin connectors. This period also marked the beginning of the transition towards electronic management systems in vehicles, significantly increasing wiring complexity. By the 1980s, the need to transmit digital and analog signals efficiently led to the adoption of materials with low dielectric constants, minimizing signal loss. The era also welcomed the Controller Area Network (CAN) bus protocol, a robust communication system that allowed multiple electronic devices to communicate over a single channel. The 1990s and beyond have seen vehicles adopting mixed network systems to cater to varied subsystem requirements, from critical controls to infotainment. The advent of advanced driver assistance systems (ADAS) and the shift towards electric vehicles (EVs) have introduced new challenges and standards in automotive wiring, emphasizing safety and efficiency in high-voltage environments. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind

The Zip Tie Story

Z0kp7up823k | 27 Jan 2024

The Zip Tie Story

🔰 Visit https://brilliant.org/NewMind to get a 30-day free trial + the first 200 people will get 20% off their annual subscription In this captivating journey through history, we explore the evolution of cable management and the birth of cable ties, a seemingly simple yet revolutionary invention. The narrative begins in the late 19th century when electrical advancements were transforming New York City. Enter Robert M. Thomas and Hobart D. Betts, Princeton University students turned entrepreneurs, who paved the way for the future of electrical infrastructure. Fast forward to the 1950s, where Maurus C. Logan, a Scottish immigrant working with Thomas and Betts, witnessed the intricate process of cable lacing in Boeing aircraft manufacturing. Cable lacing, a century-old technique, involved using waxed linen cords to neatly secure cable bundles, primarily in telecommunications. Logan, determined to simplify this labor-intensive process, spent two years developing what would become the modern cable tie. Logan's breakthrough came in 1958 with a patent submission for a nylon strap with an integrated oval aperture, designed to loop around cables and secure itself through friction. Despite initial indecisiveness on the latching mechanism, Logan's design marked the birth of the cable tie. Thomas and Betts further refined the design, leading to the iconic Ty-Rap cable tie, patented in 1962, with lateral locking grooves and an embedded steel locking barb for enhanced security. The cable tie's success led to legal disputes, as its design closely resembled a British patent by Kurt Wrobel. Nevertheless, Thomas and Betts prevailed in the market, solidifying their claim as the inventors of the cable tie. The Ty-Rap cable tie evolved into specialized versions, including heat-resistant and space-grade variants. Offshoot products like Ty-Met, made of stainless steel, and Ty-Fast, a nylon tie with an integrated ratchet barb, gained popularity globally, earning the colloquial name "zip ties" or "tie wraps." Today, over 45 companies globally produce cable ties, with an estimated annual production of 100 billion units. Thomas and Betts, now ABB Installation Products, continue to be a key player in the cable tie market, with ongoing developments for niche applications. Maurus Logan, the visionary behind the cable tie, dedicated his career to innovation, filing six patent applications and rising to the role of Vice President of Research and Development. His legacy lives on as cable ties have become an integral part of our modern world, found everywhere from the ocean floor to the surface of Mars, silently playing a crucial role in powering our information-driven world and beyond. 👁‍🗨 CHECK OUT @ConnectionsMuseum https://www.youtube.com/watch?v=GTC_r9AIgdo 🌐 SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind

The Surprising Story Of Aerial Refueling

jJ_i4OzgaOQ | 30 Dec 2023

The Surprising Story Of Aerial Refueling

▶ Visit https://brilliant.org/NewMind to get a 30-day free trial + the first 200 people will get 20% off their annual subscription In the early days of aviation, both the civil and military worlds sought a practical method for traversing large distances. While airframe and engine designs were constantly evolving, air-to-air refueling was seen as the only immediate solution to the range extension problem, particularly for military applications. The first attempts at air-to-air refueling were carried out as dangerous stunts performed by civilian pilots known as barnstormers at flying circuses. The first true systematic attempt at inflight refueling was conducted on October 3, 1920 in Washington, D.C. by Godfrey Cabot of the United States Naval Reserve. Finally, in 1923, WWI veteran pilots Captain Lowell Smith and Lieutenant John Richter would devise a method to deal with the flight duration limits that had plagued them during combat. A few months later, numerous test flights were flown over a circular course, with the team achieving their first flight endurance record on June 27th, at 6 hours and 39 minutes of flight time. Using the refueling technique developed by Smith and Richter, the tankers carried a 50-foot hose that would be lowered to the receiver aircraft, which itself was modified with a large fuel funnel that led to its fuselage tank. Throughout the entire flight, forty-two contacts were made with the tankers, with almost 5,000 gallons of gasoline and 245 gallons of oil being transferred. By 1935, British aviator Alan Cobham would demonstrate a technique known as grappled-line looped-hose air-to-air refueling. In this procedure, the receiver aircraft would trail a steel cable which was then grappled by a line shot from the tanker. The line was then drawn into the tanker, where the receiver's cable was connected to the refueling hose. Once the hose was connected, the tanker climbed slightly above the receiving aircraft, where fuel would flow under gravity. By the late 1930s, Cobham's company, Flight Refuelling Ltd or FRL, would become the very first producer of a commercially viable aerial refueling system. In March of 1948, the USAF's Air Materiel Command initiated the GEM program, in the hopes of developing long range strategic capabilities through the study of aircraft winterization, air-to-air refueling and advanced electronics. The air-to-air refueling program in particular was given top priority within GEM. After a year of training and testing with the modified FRL air-to-air refueling system, it would be used by the B-50 Superfortress "Lucky Lady II" of the 43rd Bomb Wing to conduct the first non-stop around-the-world flight. The solution to the problem came in the form of a flying boom refueling concept. The flying boom aerial refueling system is based on a telescoping rigid fueling pipe that is attached to the rear of a tanker aircraft. The entire mechanism is mounted on a gimbal, allowing it to move with the receiver aircraft. In a typical flying boom aerial refueling scenario, the receiver aircraft rendezvouses with the tanker and maintains formation. The receiver aircraft then moves to an in-range position behind the tanker, under signal light or radio guidance from the boom operator. Once in position, the operator extends the boom to make contact with the receiver aircraft, where fuel is then pumped through the boom. Simultaneously, Boeing would develop the world's first production aerial tanker, the KC-97 Stratofreighter.
Over the next few years, Boeing would develop the first high-altitude, high-speed, jet-engine powered flying-boom aerial tanker, the KC-135 Stratotanker. By 1949, Cobham had devised the first probe-and-drogue aerial refueling system. Probe-and-drogue refueling employs a flexible hose that trails behind the tanker aircraft. During aerial refueling, the drogue stabilizes the hose in flight and provides a funnel to guide the insertion of a matching refueling probe that extends from the receiver aircraft. When refueling operations are complete, the hose is then reeled up completely into an assembly known as the Hose Drum Unit. Operational testing of the first probe-and-drogue refueling system began in 1950. On June 4th, 2021, the US Navy conducted its first-ever aerial refueling between a manned aircraft and an unmanned tanker, using a Boeing MQ-25 Stingray and a Navy F-18 Super Hornet. Conducted over Mascoutah, Illinois, the four-and-a-half-hour test flight performed a series of both wet and dry contacts with the UAV, with around 10 minutes of total contact time and roughly 50 gallons of fuel transferred. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind

The Ingenious Evolution of Gyroscope Technology

UfstC_6xv1M | 23 Dec 2023

The Ingenious Evolution of Gyroscope Technology

▶ Visit https://brilliant.org/NewMind to get a 30-day free trial + the first 200 people will get 20% off their annual subscription Discover the incredible journey of gyroscopes in transforming navigation and the aerospace industry. From historic sea voyages to the cutting-edge technology in modern aviation and space exploration, this video unveils the fascinating evolution of gyroscopes. Dive into the origins with HMS Victory's tragic loss and John Serson's pioneering work, to the groundbreaking inventions of Bohnenberger, Johnson, and Foucault. Explore the fundamental principles of gyroscopes, their role in the development of gyrocompasses by Anschütz-Kaempfe, and their critical application in early 20th-century aviation and warfare technologies. Learn about the vital transition during World War II to sophisticated inertial navigation systems (INS) and their pivotal role in rocketry, especially in the German V2 and American Atlas rockets. Understand the mechanics of INS, the challenge of drift, and the advancements in computing that led to its refinement. Discover how the aviation industry embraced INS, from the B-52's N-6 system to the Delco Carousel in commercial aviation. Witness the emergence of new gyroscopic technologies like ring laser and fiber-optic gyroscopes, and their integration with GPS for unprecedented navigational accuracy. Explore the latest advancements in Micro-Electro-Mechanical Systems (MEMS) and their widespread application in consumer electronics. Finally, envision the future of gyroscopes in enhancing virtual reality, autonomous vehicles, and motion-based user interfaces. This comprehensive overview not only traces the history but also forecasts the exciting future of gyroscopes in our increasingly digital and interconnected world. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind

The Incredible Story Of Randomness

iT20A4KQxyM | 23 Nov 2023

The Incredible Story Of Randomness

▶ Visit https://brilliant.org/NewMind to get a 30-day free trial + the first 200 people will get 20% off their annual subscription In this comprehensive exploration of randomness, we delve into its perplexing nature, historical journey, statistical interpretations, and pivotal role in various domains, particularly cryptography. Randomness, an enigmatic concept defying intuition, manifests through seemingly unpredictable sequences like coin flips or digits of pi, yet its true nature is only indirectly inferred through statistical tests. The historical narrative reveals humanity's earliest encounters with randomness in gaming across ancient civilizations, progressing through Greek philosophy, Roman personification, Christian teachings, and mathematical analysis by Italian scholars and luminaries like Galileo, Pascal, and Fermat. Entropy, introduced in the 19th century, unveiled the limits of predictability, especially in complex systems like celestial mechanics. Statistical randomness, derived from probability theory, relies on uniform distribution and independence of events in a sample space. However, its limitation lies in perceivable unpredictability, as exemplified by the digits of pi or coin flips, which exhibit statistical randomness yet remain reproducible given precise initial conditions. Information theory, notably Claude Shannon's work, established entropy as a measure of uncertainty and information content, showcasing randomness as the opposite of predictability in a system. Algorithmic randomness, introduced by von Mises and refined by Kolmogorov, measures randomness through compressibility but faces challenges due to computability. Martin-Löf's work extends this notion by defining randomness based on null sets. The integration of randomness into computer science led to the emergence of randomized algorithms, divided into Las Vegas and Monte Carlo categories, offering computational advantages. Encryption, crucial in modern communications, relies on randomness for secure key generation, facing challenges due to vulnerabilities in pseudorandom algorithms and hardware random number generators. The evolution of cryptography, from DES to AES and asymmetric-key algorithms like RSA, emphasizes the critical role of randomness in securing digital communications. While hardware random number generators harness inherent physical unpredictability, they face challenges regarding auditability and potential vulnerabilities. The future of randomness lies in embedded quantum random number generators, promising heightened security, while encryption algorithms adapt to counter emerging threats posed by quantum computing's properties. This in-depth exploration captures the historical, theoretical, and practical dimensions of randomness, highlighting its significance in diverse fields and its pivotal role in securing modern communications. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind
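The description divides randomized algorithms into Las Vegas and Monte Carlo categories. As a rough sketch (these toy examples are my own, not the video's), the code below contrasts a Las Vegas algorithm, randomized quicksort, with a Monte Carlo estimate of pi:

```python
# Toy illustrations of the two randomized-algorithm families mentioned above.
# Las Vegas: result is always correct, only the running time is random.
# Monte Carlo: running time is fixed, the answer is correct only approximately/probabilistically.
import random

def randomized_quicksort(items):
    """Las Vegas example: random pivot choice, but the output is always sorted."""
    if len(items) <= 1:
        return items
    pivot = random.choice(items)
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

def estimate_pi(samples=100_000):
    """Monte Carlo example: estimate pi by sampling points in the unit square."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples   # accuracy improves with more samples

if __name__ == "__main__":
    print(randomized_quicksort([5, 3, 8, 1, 9, 2]))   # exact answer, random runtime
    print(estimate_pi())                              # approximate answer, fixed runtime
```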

The Science Of Foam

2KhKlHNlP4Y | 21 Oct 2023

The Science Of Foam

▶ Visit https://brilliant.org/NewMind to get a 30-day free trial + the first 200 people will get 20% off their annual subscription Explore the fascinating world of foam in this in-depth exploration of its history and properties. From its natural occurrences in sea foam and whipped egg whites to its critical role in modern manufacturing, foam has evolved over centuries. Learn about its structure, stability, and the essential role of surfactants in foam formation. Discover the historical journey of foam, from natural cellular solids like cork to the development of manufactured foams in the late 1800s. Dive into the creation of foam latex and the rise of polymeric foams, including the iconic Styrofoam and versatile polyurethane foams. Understand the environmental concerns surrounding foam products and the ongoing efforts to make them more sustainable. Explore exotic foam compositions like syntactic foams and metal foams, showcasing foam's diverse applications in extreme environments. Join us on this educational journey into the complex and intriguing world of foam. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind

The Next Generation Of Stealth Materials

5v1ilFCOOCw | 23 Sep 2023

The Next Generation Of Stealth Materials

▶ Visit https://brilliant.org/NewMind to get a 30-day free trial + the first 200 people will get 20% off their annual subscription

In October 2006, a team of British and U.S. scientists demonstrated a breakthrough physical phenomenon, until then known only to science fiction: the world's first working "invisibility cloak". The team, led by Professor Sir John Pendry, created a small device about 12 cm across that had the intrinsic property of redirecting microwave radiation around it, rendering it almost invisible to microwaves. What made this demonstration particularly remarkable was that this characteristic of microwave invisibility was not derived from the chemical composition of the object but rather the structure of its constituent materials. The team had demonstrated the cloaking properties of a metamaterial.

WHAT ARE THEY
A metamaterial is a material purposely engineered to possess one or more properties that are not possible with traditional, naturally occurring materials. Radiation can be bent, amplified, absorbed or blocked in a manner that far supersedes what is possible with conventional materials.

PROPERTIES OF REFRACTION
The refractive index of a material varies with the radiation's wavelength, which in turn also causes the angle of refraction to vary. Every known natural material possesses a positive refractive index for electromagnetic waves. Metamaterials, however, are capable of negative refraction.

HOW REFRACTION IS CONTROLLED
Permittivity is a measure of how much a material polarizes in response to an applied electric field, while magnetic permeability is the measure of magnetization that a material obtains in response to an applied magnetic field. As an electromagnetic wave propagates through the metamaterial, each unit responds to the radiation, and the collective result of these interactions creates an emergent material response to the electromagnetic wave that supersedes what is possible with natural materials.

FIRST CONCEPTS
The first mention of the properties of metamaterials was in 1904, with the conceptualization of negative wave propagation by British mathematician Horace Lamb and British physicist Arthur Schuster. Decades later, Soviet physicist Victor Veselago's research included producing methods for predicting the phenomena of refraction reversal, for which he coined the term left-handed materials.

ARTIFICIAL DIELECTRICS
From this, the development of artificial dielectrics during the 1950s and 1960s began to open up new ways to shape microwave radiation, especially for radar antenna design. Artificial dielectrics are composite materials made from arranged arrays of conductive shapes or particles, supported in a nonconductive matrix. Similar to metamaterials, artificial dielectrics are designed to have a specific electromagnetic response, behaving as an engineered dielectric material.

FIRST METAMATERIALS
Pendry's expertise in solid state physics had led him to be contracted by Marconi Materials Technology in order to explain the physics of how their naval stealth material actually worked. Pendry discovered that the microwave absorption of the material did not come from the chemical structure of the carbon it was made from but rather the long, thin shape of the fibers. He had figured out how to manipulate a material's electric and magnetic response, effectively allowing for a method to engineer how electromagnetic radiation moves through a material.

SUPERLENS
By late 2000, Pendry had proposed the idea of using metamaterials to construct a superlens. Pendry theorized that one could be developed employing the negative refractive index behavior of a metamaterial. However, in practice, this proved to be an incredibly difficult task due to the resonant nature of metamaterials. By 2003, Pendry's theory was first experimentally demonstrated at microwave frequencies by exploiting the negative permittivity of metals to microwaves.

CLOAKING
Composed of 21 alternating sheets of silver and a glasslike substance, the material, referred to as a fishnet, causes light to bend in unusual ways as it moves through the alternating layers. What made this particularly notable was that it operated on a wider band of radiation than previous attempts.

FUTURE OF CLOAKING
Despite the ongoing research and relative success with microwave radiation, to date optical cloaking still remains elusive due to the technical challenges of manipulating light within a metamaterial. Light moving through materials typically gets absorbed until, at some point, the energy of the radiation falls off, making it a challenge to guide its propagation in a useful way.

SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind

Driving On Compressed Air: The Little-Known Compressed Air Revolution

fFoYPj3Ntzc | 26 Aug 2023

Driving On Compressed Air: The Little-Known Compressed Air Revolution

▶ Visit https://brilliant.org/NewMind to get a 30-day free trial + the first 200 people will get 20% off their annual subscription In March 2020, Reza Alizade Evrin and Ibrahim Dincer from the University of Ontario Institute of Technology's Clean Energy Research Lab pioneered an innovative vehicle prototype fueled by compressed air, using readily available components. This prototype showcased remarkable energy efficiency, reaching up to 90% of a lithium-ion electric vehicle's efficiency, with a predicted range of around 140 kilometers. While current electric vehicles surpass that range, the real breakthrough was the exclusive use of compressed air as an energy source. The history of compressed air vehicles dates back to the early 19th century when the concept of harnessing compressed air's power for vehicles emerged. Despite early breakthroughs like Louis Mékarski's compressed air locomotive in the 1860s, practical applications were limited. Mining operations and tunnel constructions adopted compressed air vehicles due to their safety advantages, but they couldn't compete with internal combustion engines. Compressed air storage systems faced inherent flaws, with conventional methods wasting energy due to heat loss during compression and cooling during expansion. Adiabatic and isothermal storage techniques were explored to improve efficiency, particularly for utility power storage. Researchers like Evrin and Dincer delved into near-isothermal compressed air storage, enhancing thermodynamic limits for vehicle applications using phase change materials. Advantages of compressed air vehicles include potential fourfold energy storage compared to lithium-ion batteries, direct mechanical energy conversion, quiet and lightweight turbine-based motors, and sustainability due to minimal toxic materials and reduced manufacturing complexity. Tankage solutions vary between low-pressure and high-pressure systems, utilizing lightweight composite tanks that are safer and cheaper to produce compared to batteries. The challenge of designing efficient air motors led to innovations like EngineAir's Di Pietro Motor, addressing torque inconsistencies through a rotary positive displacement design. However, achieving consistent torque across pressure ranges remained an obstacle. Commercialization history saw ups and downs. French engineer Guy Negre proposed the idea in 1996, leading to prototypes like MDI's "OneCAT" and partnerships with companies like Tata Motors. However, challenges including safety concerns and governmental support for electric and hybrid vehicles hindered mass adoption. MDI's AirPod 2.0, introduced in 2019, featured hybrid refueling and improved speeds, yet production plans remained uncertain. Despite the journey's challenges, MDI persists in the pursuit of compressed air vehicle commercialization, aiming to revolutionize transportation with this sustainable technology. FOOTAGE Traveling Tom - 1906, HK Porter, Compressed air mine locomotive demonstration Infinite Composites Technologies Angelo Di Pietro SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind
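The description mentions that conventional compressed-air storage wastes energy as heat during compression, which is what adiabatic and near-isothermal schemes try to address. As a back-of-the-envelope sketch under ideal-gas assumptions (the gas amount, temperature, and pressures are my own illustrative numbers, not figures from the Evrin and Dincer prototype), the two ideal compression-work limits can be compared like this:

```python
# Ideal-gas comparison of steady-flow compressor work: isothermal vs. adiabatic.
# Illustrative only; all numbers below are assumptions, not prototype data.
import math

R = 8.314          # J/(mol*K), universal gas constant
GAMMA = 1.4        # heat-capacity ratio for air (approximate)

def isothermal_work(n_mol, temp_k, p1, p2):
    """Ideal work to compress n_mol of gas from p1 to p2 at constant temperature."""
    return n_mol * R * temp_k * math.log(p2 / p1)

def adiabatic_work(n_mol, temp_k, p1, p2):
    """Ideal steady-flow compressor work with no heat exchange (reversible adiabatic)."""
    ratio = (p2 / p1) ** ((GAMMA - 1.0) / GAMMA)
    return n_mol * R * temp_k * (GAMMA / (GAMMA - 1.0)) * (ratio - 1.0)

if __name__ == "__main__":
    n, T, p1, p2 = 100.0, 300.0, 1e5, 3e7   # 100 mol of air, 300 K, 1 bar -> 300 bar (assumed)
    print(f"isothermal: {isothermal_work(n, T, p1, p2) / 1e6:.2f} MJ")
    print(f"adiabatic:  {adiabatic_work(n, T, p1, p2) / 1e6:.2f} MJ")
    # The adiabatic figure is larger; the difference is heat that is lost unless
    # it is captured, which is what motivates near-isothermal storage designs.
```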

The Future of Auto Manufacturing: AI Driven Design

z8fYer8G3Y8 | 22 Jul 2023

The Future of Auto Manufacturing: AI Driven Design

▶ Visit https://brilliant.org/NewMind to get a 30-day free trial + the first 200 people will get 20% off their annual subscription The Czinger 21C hypercar concept incorporates a revolutionary brake node, a combination of braking system and suspension upright, using Divergent 3D's DAPS system. DAPS utilizes Metal Additive Manufacturing and generative design powered by AI to create highly optimized structures. Generative design explores numerous solutions based on defined parameters, producing innovative designs. It can optimize parts while considering various constraints and objectives. Generative design methods include Cellular Automata, Genetic Algorithms, Shape Grammar, L-Systems, and Agent-Based Models. Cellular Automata use mathematical models with discrete cells and predefined rules to create emergent patterns. Genetic Algorithms simulate natural selection to evolve solutions in iterative generations. Shape Grammar employs a vocabulary of basic shapes and rules to create diverse designs. L-Systems model growth and complex structures using symbols and iterative rules. Agent-Based Models simulate interactions of autonomous agents, producing emergent patterns and system-level dynamics. These generative design methods find application in various industries, including architecture, automotive, and aesthetics. They help optimize components, such as connecting rods, lattice patterns, taillights, and suspension systems, improving performance while reducing weight. However, the use of generative design is still developing, with advancements in AI and computational models continually expanding its capabilities. In the future, AI-driven generative design could revolutionize engineering and design processes, surpassing human capabilities and rapidly producing highly efficient and complex designs. It has the potential to redefine the roles of engineers and designers, leading to more innovative and optimized products in various fields. FOOTAGE AutoDesk - @Autodesk Softology - @Softology SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind
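Of the generative design methods listed above, the genetic algorithm is the easiest to sketch in a few lines. The toy example below is my own illustration (the fitness function and all hyperparameters are arbitrary stand-ins, unrelated to Divergent 3D's DAPS system); it "evolves" a single design variable toward a target value:

```python
# Bare-bones genetic algorithm, illustrating the "simulate natural selection to
# evolve solutions in iterative generations" idea mentioned above.
# The fitness function and all hyperparameters are arbitrary illustrations.
import random

TARGET = 42.0                      # hypothetical optimum for a single design variable

def fitness(x):
    """Higher is better: closeness of a candidate design to the target value."""
    return -abs(x - TARGET)

def evolve(generations=200, pop_size=30, mutation_scale=1.0):
    population = [random.uniform(-100, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Crossover + mutation: children are blends of two parents plus noise.
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            children.append((a + b) / 2 + random.gauss(0, mutation_scale))
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(f"best design variable found: {best:.3f} (target {TARGET})")
```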

Overcoming The Rotary Engine’s Biggest Flaw

b4Cub8FPrsY | 24 Jun 2023

Overcoming The Rotary Engine’s Biggest Flaw

▶ Visit https://brilliant.org/NewMind to get a 30-day free trial + the first 200 people will get 20% off their annual subscription The inspiration for the Wankel rotary engine is derived from the geometric principle that when a circle is rolled on its own circumference along another circle that has double the radius, a curve known as an epitrochoid is created. This curve forms the shape of the inner walls of the rotor housing. The rotor housing hosts all stages of the rotary engine's combustion cycle, much like a cylinder in a conventional engine. In order to keep compression in the chamber of a Wankel engine, the three tips of the rotor must form gas-tight seals against the inner walls of the rotor housing. This is accomplished by seals at the three apexes of the triangle, known as apex seals. These seals are usually made of metal and are pushed against the housing wall by springs. Since the seals are in contact with the housing's inner case, in order to reduce friction they're covered in engine oil. Because of the exposure of engine oil to the combustion process, a rotary engine burns oil by design. The amount of oil used is metered by a throttle-controlled metering pump. The three apexes of the triangular-shaped rotor move uniformly along the inside walls of the rotor housing, dividing the cavity between the rotor and the interior walls of the housing into three continually changing regions of volume. Because of this unique configuration, rotary engines are classified as variable-volume progressing-cavity systems. Each rotor has three faces, creating three cavities of volume per housing. In effect, each face of the rotor "sweeps" its own volume as the rotor moves in an eccentric orbit within the housing. Each side of the rotor is brought closer to and then further away from the wall of the internal housing, compressing and expanding the combustion chamber. A rotor is effectively akin to a piston. Starting in the early 1960s, Mazda released a slew of unique, Wankel rotary powered models such as the Cosmo, RX-3 and three generations of the Mazda RX-7. The iconic history of Mazda and the evolution of the Wankel rotary engine began with a joint study contract between Mazda and the German car firm NSU, whose first rotary production car came equipped with a water-cooled single-rotor engine and standard front disc brakes, differentiating it from other similar cars of the period. Early cars required an engine rebuild after only 50,000 kilometers or 31,000 miles. Many of these failures were attributed to poorly designed apex seal tips, a common weak point later realized in rotary engines. Because of the direct contact of the apex seals with the housing, the biggest obstacle engineers faced in initial designs was the chatter marks on the rotor housing's sliding surfaces. The answer came in the form of carbon-based apex seals. To an extent, these carbon seals were self-lubricating, addressing the issues facing the rotor housing wall surface. They were also used in conjunction with an aluminum rotor housing, in which the walls were chrome-plated for durability. What made this possible was the new porous chrome plating on the interior walls of the rotor housing. The surface finish of this plating improved the effectiveness of the lubrication between the apex seals and the rotor housing.
From 1975 to 1980, it was discovered that the current apex seal version was subjected to high thermal and centrifugal loads during high RPM operation and under periods of high engine load. To rectify this issue, Mazda implemented a slight crown on the apex seal. This additional crowning compensated for the rotor housing's slight deformation under high loads, ensuring sufficient contact with the rotor housing walls. Mazda also improved the corner pieces by incorporating a spring design to keep the clearance of the rotor groove at a minimum. By the early 1980s, further refinements by Mazda led to the adoption of a top-cut design that extended the main seal. The purpose was to reduce gas leakages at one end of the apex seal, where it would segment into two pieces. From 1985 to 2002, the apex seal had been further reduced in size to 2mm. Additionally, Mazda filled the center cavity of the spring corners with a heat-resistant rubber epoxy, adding additional sealing properties. This latest iteration of the apex seal design was used in Mazda's iconic high output, low weight twin turbocharged 13B-REW engine. Made famous by the third-generation RX-7, it was used until the engine was finally dropped from production and replaced with the Renesis engine, which used its own apex seal design. The apex seal in the Renesis engine was now a two-piece design made from cast iron with a low carbon content. -- SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind
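For readers curious about the geometry described at the start of this entry, the sketch below (my own illustration, not material from the video) generates the two-lobed epitrochoid traced when a circle rolls around a fixed circle of twice its radius with the tracing point offset from the rolling circle's center. The radii and offset are illustrative values only, not actual Wankel housing dimensions:

```python
# Parametric sketch of the two-lobed epitrochoid described above: a circle of
# radius r rolling around a fixed circle of radius 2r, with the tracing point
# offset a distance d from the rolling circle's center. Values are illustrative.
import math

def epitrochoid_points(r=1.0, d=0.5, steps=360):
    """Return (x, y) points of the curve traced by the offset point."""
    fixed_r = 2.0 * r                      # fixed circle has double the radius
    points = []
    for i in range(steps + 1):
        theta = 2.0 * math.pi * i / steps
        # Standard epitrochoid equations for a circle rolling outside another.
        x = (fixed_r + r) * math.cos(theta) - d * math.cos((fixed_r + r) / r * theta)
        y = (fixed_r + r) * math.sin(theta) - d * math.sin((fixed_r + r) / r * theta)
        points.append((x, y))
    return points

if __name__ == "__main__":
    pts = epitrochoid_points()
    # Print a handful of sample points; plotting all of them reveals the
    # familiar peanut-shaped bore of a Wankel rotor housing.
    for x, y in pts[::60]:
        print(f"{x:6.3f}, {y:6.3f}")
```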

The World Of Strange Computers

szTtg302Hic | 27 May 2023

The World Of Strange Computers

▶ Visit https://brilliant.org/NewMind to get a 30-day free trial + the first 200 people will get 20% off their annual subscription Explore the fascinating world of unconventional computers that defied the norms of their time, revolutionizing diverse fields from engineering to economics, torpedo guidance, digital logic, and animation. From Lukyanov's ingenious Water Integrator solving complex equations using water flow to Moniac's hydraulic macroeconomics modeling, delve into the Torpedo Data Computer's role in WWII, the conceptual marvel of Domino Computers, and the pioneering analog magic of Scanimate in producing early motion graphics. Witness how these unconventional machines shaped industries, solving complex problems in ways that predated the modern era of computing. -- SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind

The Evolution of Gasoline

BVa-RPNWO6k | 22 Apr 2023

The Evolution of Gasoline

▶ Visit https://brilliant.org/NewMind to get a 30-day free trial + the first 200 people will get 20% off their annual subscription Gasoline is a mixture of light hydrocarbons with relatively low boiling points that, in the early days of oil refining, had no significant commercial value and was even seen as dangerous due to its high volatility. Because of this, it was initially considered a waste product and was often discarded and simply burned off. COMPOSITION OF GASOLINE Despite its public perception, gasoline is not a clearly defined compound but rather a homogeneous blend of light to medium molecular weight hydrocarbons. The hydrocarbon types that commonly combine to form gasoline and contribute to its properties as a fuel are paraffins, olefins, naphthenes, and aromatics. Depending on the blend, gasoline's energy density can vary anywhere from 32 to 36 megajoules per liter. EARLY GASOLINE Early gasoline produced directly from distillation was known as straight-run gasoline. When gasoline containing sulfur is burned, the resulting sulfur oxides are a major contributor to smog, acid rain, and ground-level ozone. These early gasoline blends would be unusable in today's higher-compression engines, as even the most high-test blends had octane ratings below 70, with lesser quality blends going as low as 40. CRACKING By 1910, the rising demand for automobiles, combined with the expansion of electrification, created a flip in the product demands of the petroleum industry, with the need for gasoline beginning to supersede that of kerosene. Dubbed the Burton process, this technique thermally decomposes straight-run gasoline and heavier oils, cracking the heavier hydrocarbons and redistributing their hydrogen to produce lighter, more hydrogen-rich hydrocarbons. The instability of the fuel was also a concern, as the higher levels of unsaturated hydrocarbons produced by thermal cracking were reactive and prone to combining with impurities, resulting in gumming, further exacerbating the problem. CATALYTIC CRACKING In the early 1920s, Almer McDuffie McAfee would develop a new refining process that could potentially triple the gasoline yielded from crude oil by existing distillation methods. Known as catalytic cracking, the process heats heavy hydrocarbon feedstock to a high temperature along with a catalyst in a reactor. The catalyst initiates a series of chemical reactions that break the hydrocarbon molecules apart into smaller fragments that are then further cracked and recombined to produce lighter, more desired hydrocarbons for gasoline. Catalytically cracked gasoline had a significantly higher olefin content, and more branched-chain and aromatic hydrocarbons, than thermally cracked gasoline, which raised its octane rating. The catalyzing action also produced a fuel with lower sulfur and nitrogen content, which results in lower emissions when burned in engines. FLUID CRACKING In an attempt to circumvent Houdry patents, Standard Oil began researching an alternative method of catalytic cracking, resulting in the development and fielding of the fluid-based catalytic cracking process in the early 1940s. As the catalyst becomes deactivated by the buildup of carbon deposits caused by the cracking process, the spent catalyst is separated from the cracked hydrocarbon products and sent to a regeneration unit.
HYDROCRACKING During this time period, a new type of catalytic cracking process emerged, based on decades of research on hydrogenation, a reaction in which hydrogen is used to break down large hydrocarbon molecules into smaller ones while adding hydrogen atoms to the resulting molecules. Its efficiency at producing higher yields of gasoline from heavier oil products led to it being adopted on a commercial scale by refineries around the world during the 1960s. POST LEAD After the phase-out of lead additives in gasoline, the petroleum industry switched to oxygenates, MTBE in particular. Concerns over MTBE later led to its own phase-out, with ethanol becoming the primary oxygenate and octane booster in gasoline by the early 2000s. ALKYLATION Beyond additives, the process of alkylation also grew in use as a way to boost octane ratings. This technique is used to produce alkylates, a high-octane blending component for gasoline. Much like other catalytic processes, the acid catalyst is separated and recycled, while the alkylates are separated and unreacted isobutane is recycled. The high-octane alkylate is then blended with other gasoline components. ISOMERIZATION Another similar catalytic technique that began to grow in popularity is gasoline isomerization. This process typically focuses on the conversion of low-octane straight-chain paraffins found in light naphtha into branched-chain hydrocarbons that have a higher octane rating. -- SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind

The Terrifying Technology Inside Drone Cameras

CpLdL8ONEm4 | 11 Mar 2023

The Terrifying Technology Inside Drone Cameras

▶ Visit https://brilliant.org/NewMind to get a 30-day free trial + the first 200 people will get 20% off their annual subscription UAVs operate in the world of tactical intelligence, surveillance and reconnaissance or ISR, generally providing immediate support for military operations, often with constantly evolving mission objectives. Traditionally, airborne ISR imaging systems were designed around one of two objectives: either looking at a large area without the ability to provide detailed resolution of a particular object, or providing a high resolution view of specific targets with a greatly diminished capability to see the larger context. Up until the 1990s, wet film systems were used on both the U-2 and SR-71. Employing a roll of film 12.7 cm or 5" wide and almost 3.2 km or 2 miles long, this system would capture one frame every 6.8 seconds, with a limit of around 1,600 frame captures per roll. BIRTH OF DIGITAL The first digital imaging system to be used for reconnaissance was the optical component of the Advanced Synthetic Aperture Radar System or ASARS. Installed on the U-2 reconnaissance aircraft in the late 1970s, ASARS used a large, phased-array antenna to create high-resolution images of the ground below using radar. Complementing the radar was an imaging system that used a charge-coupled device or CCD camera to capture visible light images of the terrain being surveyed. This CCD camera operated in synchronization with the radar system and had a resolution of around 1 meter or 3.3 feet per pixel. A CCD sensor consists of tiny, light-sensitive cells arranged in an array. When combined with the limitations of computing hardware of the time, their designs were generally limited to less than a megapixel, with resolutions as low as 100,000 pixels being found in some systems. CMOS By the early 1990s, a new class of imaging sensor called active-pixel sensors, primarily based on the CMOS fabrication process, began to permeate the commercial market. Active-pixel sensors employ several transistors at each photosite to both amplify and move the charge using a traditional signal path, making the sensor far more flexible for different applications due to this pixel independence. CMOS sensors also use more conventional and less costly manufacturing techniques already established for semiconductor fabrication production lines. FIRST WAMI Wide Area Motion Imagery takes a completely different approach to traditional ISR technologies by making use of panoramic optics paired with an extremely dense imaging sensor. The first iteration of Constant Hawk’s optical sensor was created by combining six 11-megapixel CMOS image sensors that captured only visible and some infrared light intensity, with no color information. At an altitude of 20,000 feet, Constant Hawk was designed to survey a circular area on the ground with a radius of approximately 96 kilometers or 60 miles, covering a total area of over 28,500 square kilometers or about 11,000 square miles. Once an event on the ground triggered a subsequent change in the imagery of that region, the system would store a timeline of the imagery captured from that region. This made it possible to access any event that occurred within the system’s range and the mission’s flight duration, at any time. The real-time investigation of a chain of events over a large area was now possible in an ISR mission.
In 2006, Constant Hawk became the first Wide Area Motion Imagery platform to be deployed as part of the Army’s Quick Reaction Capability to help combat enemy ambushes and improvised explosive devices in Iraq. In 2009, BAE Systems would add night vision capabilities and increase the sensor density to 96 megapixels. In 2013, full color imagery processing capability would be added. The system was so successful that the Marine Corps would adopt elements of the program to create its own system called Angel Fire, and a derivative system called Kestrel. ARGUS-IS As Constant Hawk was seeing its first deployment, several other similar systems were being developed that targeted more niche ISR roles; however, one system in particular would create a new class of aerial surveillance, previously thought to be impossible. Called ARGUS-IS, this DARPA project, contracted to BAE Systems, aimed to image an area at such high detail and frame rate that it could collect "pattern-of-life" data that specifically tracks individuals within the sensor field. The system generates almost 21 TB of color imagery every second. Because ARGUS-IS is specifically designed for tracking, a processing system derived from the Constant Hawk project, called Persistics, was developed. Because this tracking can even be done backwards in time, the system becomes a powerful tool for forensic investigators and the intelligence analysis of patterned human behavior. -- SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind
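As a quick sanity check on the coverage figures quoted above, the surveyed area follows directly from the circular footprint's radius. A minimal sketch, using only the radius quoted in the description:

```python
import math

def coverage_area_km2(radius_km):
    """Ground area covered by a circular wide-area sensor footprint."""
    return math.pi * radius_km ** 2

# Using the ~96 km radius quoted for Constant Hawk:
print(f"{coverage_area_km2(96):,.0f} km^2")   # ~28,953 km^2, consistent with the 28,500+ figure
```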

The Most Complex System In Modern Cars

mxHJ3O4iudw | 11 Feb 2023

The Most Complex System In Modern Cars

▶ Visit https://brilliant.org/NewMind to get a 30-day free trial + the first 200 people will get 20% off their annual subscription An airbag, in its most elemental form, is an automotive safety restraint system designed to inflate a cushioning bag extremely quickly, then rapidly deflate it in a controlled manner, during a collision. They’re considered a passive restraint system because, unlike seatbelts, they require no interaction by the occupant for their operation. SYSTEM DESIGN An airbag system is fundamentally composed of one or more inflation mechanisms, located primarily within the steering wheel for the driver and the upper dashboard for the front passenger. These inflation mechanisms are controlled by a centralized system that continuously monitors for impact events using anywhere from a single sensor to dozens, depending on the system’s sophistication. Once this system detects an impact, one or several inflation mechanisms are pyrotechnically triggered by an electrical signal, causing a gas-generating propellant to be ignited, rapidly inflating a bag that is folded within each inflation mechanism. While simple in concept, the difference between an airbag’s deployment protecting an occupant and causing traumatic or even deadly injuries comes down to the precise millisecond timing of its operation. ANATOMY OF A COLLISION The airbag must act within an incredibly narrow window, roughly the first ⅓ of the entire collision duration, because it needs to deploy before the occupants contact any portion of the vehicle interior as it crushes, and before the limits of the seat belt’s stretch are reached. The airbag’s inflation must also be timed so that it is fully inflated before the occupant engages with it, to minimize trauma from the inflation process itself. COMPRESSED AIR Both systems were based on a store of compressed air that would inflate the airbag using mechanical trigger valves. By the 1960s, practical airbag systems for vehicles were being explored by the major manufacturers, and from this decade of research it was determined that compressed air systems were far too slow-reacting to be effective. These flaws made the mechanical compressed air airbag system completely unsuitable for commercial adoption. A BREAKTHROUGH Allen K. Breed would make a breakthrough that finally made airbags commercially viable, with the development of the ball-in-tube electromechanical crash detection sensor. When a collision occurs, the ball is separated from the magnet, moving forward to electrical contacts and closing the trigger circuit. Breed also pioneered the use of a gas generator as a method for rapidly inflating an airbag. Breed devised an inflation mechanism that used just 30-150 grams of the solid fuel sodium azide as a gas-generating agent for airbags. The sodium azide would exothermically decompose to sodium and nitrogen, fully inflating the airbag with the resultant gas within just 60-80 milliseconds. AIRBAG HISTORY Any car sold in the United States must now be certified to meet the Federal Motor Vehicle Safety Standards or FMVSS, a comprehensive set of regulations on vehicle design, construction, and performance. The NHTSA began to prepare for a second wave of mandates during the 1970s, specifically targeting a push for new safety technologies, with the airbag being a prime technology for regulatory compliance. The first mass-produced vehicle to have an airbag system was introduced on a government-purchased fleet in 1973.
Called the Air Cushion Restraint System or ACRS, General Motors' system employed impact sensors mounted in the vehicle's front bumper in order to deploy the airbags embedded in the steering wheel for the driver, and in the dashboard for the passenger. By 1984, the NHTSA would reach a compromise with the industry, agreeing to the introduction of a passive restraint system mandate for all new vehicles produced in the US, beginning on April 1, 1989. Manufacturers had two options: either an automatic seat belt system or the airbag. The 1980s saw the industry's view of the airbag shift from a primary safety system to one designated as a supplemental restraint system or SRS, or the less common designation of supplemental inflatable restraints or SIR. THE NEXT WAVE OF AIRBAG TECHNOLOGY This proliferation led to the development of a new generation of airbag systems during the 1990s that overcame the flaws of earlier systems through the use of recent breakthroughs in the semiconductor industry. ALGORITHMIC CRASH DETECTION The electronic control unit that formed the backbone of airbag systems, called the airbag control unit or ACU, would now become an embedded computer, relying on a fusion of MEMS sensor data and other vehicle inputs to employ algorithms that could manage a larger spectrum of collision types and inflation response profiles. -- SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind
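As a rough illustration of the gas-generator figures quoted above, the stoichiometry of sodium azide decomposition (2 NaN3 -> 2 Na + 3 N2) combined with the ideal gas law gives a ballpark nitrogen volume. The temperature and pressure used here are assumed room-condition values; a real inflator produces hot gas and the bag vents, so actual volumes differ.

```python
# Back-of-the-envelope estimate of the nitrogen volume produced by a
# sodium azide charge, using 2 NaN3 -> 2 Na + 3 N2 and the ideal gas law.
R = 8.314          # gas constant, J/(mol*K)
M_NAN3 = 65.01     # molar mass of NaN3, g/mol

def nitrogen_volume_litres(charge_grams, temp_kelvin=298.0, pressure_pa=101_325.0):
    mol_nan3 = charge_grams / M_NAN3
    mol_n2 = mol_nan3 * 3 / 2            # stoichiometry: 3 mol N2 per 2 mol NaN3
    volume_m3 = mol_n2 * R * temp_kelvin / pressure_pa
    return volume_m3 * 1000.0

for grams in (30, 150):                  # the 30-150 g range quoted in the description
    print(f"{grams} g NaN3 -> ~{nitrogen_volume_litres(grams):.0f} L of N2")
```

The output (roughly 17 L to 85 L) lines up with the order of magnitude of a driver- or passenger-side bag.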

The Most Tortured Part In An Engine

Xy4tKWHTpY8 | 07 Jan 2023

The Most Tortured Part In An Engine

▶ Visit https://brilliant.org/NewMind to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription A modern head gasket is an intricate hybrid mechanical seal engineered to fill the space between a reciprocating engine’s head and block. SEALING ENGINE LUBRICANT A head gasket must seal the passages that carry engine oil between the block and the head. Engine oil can vary dramatically in viscosity and temperature, ranging from the extreme lows of frigid ambient temperatures to as high as 135°C or 275°F. SEALING ENGINE COOLANT Similar to engine oil, on most water-cooled engines a head gasket must also seal the passages that carry engine coolant between the head and the block. When compared to engine oil, engine coolant has a relatively consistent viscosity, with a lower maximum temperature of around 120°C or about 250°F, and with normal operation seldom reaching above 104°C or 220°F. Much like with engine oils, the materials that seal engine coolant must, on top of thermal cycling and movement, deal with the corrosive properties of engine coolant. SEALING COMBUSTION GASES Sealing combustion gases is, by far, the most brutal and critical requirement of a head gasket. A head gasket forms part of the combustion chamber, and if this seal is compromised, the affected cylinder will lose the ability to produce a normal combustion sequence. Depending on the nature of this failure, the cylinder may also consume or cross-contaminate other engine fluids. STABILITY A head gasket must be deformable enough to maintain a seal between the imperfections of the head and block surfaces. In addition to these forces, head gaskets have to function under the dynamics and extreme mechanical stresses of combustion pressure. The head bolts that fasten the head to the block are also typically not symmetrically spaced, creating an unevenly distributed clamping force across the gasket, with each of these bolts exerting a force of up to 4,500 kg or about 10,000 lbs. Beyond these expectations, they must also be durable and capable of lasting across a significant portion of the engine’s life with little to no maintenance. FIRST HEAD GASKETS With the introduction of the internal combustion engine in the 1860s, almost every type of elastic material ever used within steam engines was experimented with to seal combustion. As the internal combustion engine transitioned from its experimental early days to a mass-produced power plant, copper would become a popular material for these early head gaskets. The relative motion of the head and block would create inconsistencies in the clamping force along the gasket’s surface. This was such a problem that, in the early days of motorsport, head gasket failure was the most common reason for race cars to not finish a race. NEW GASKET TECHNOLOGY As the automotive industry began to flourish in the 1920s and 30s, less costly, mass-production friendly head gasket designs were explored. One durable yet relatively inexpensive option was the steel shim head gasket. Embossments are stamped, raised regions on critical sealing areas of a gasket that create a smaller contact point. COMPOSITE GASKET The beater-add process offered a new lower-cost gasket material option that would lead to manufacturers eventually introducing the composite head gasket in the late 1940s. Metal beads, called fire rings, are created within the gasket’s metal structure to seal the combustion chamber and protect the elastomer material from overheating.
The non-metallic surface of the gasket is then impregnated with a silicone-based agent to seal any pores and prevent the gasket from swelling when it comes in contact with liquids. Some designs may even incorporate seal elements made from a high-temperature, chemical-resistant fluorocarbon-based elastomer material called Viton. MLS HEAD GASKET In 1970, Japanese gasket maker Ishikawa was issued the first patent for a revolutionary new type of head gasket, called the multi-layer steel or MLS head gasket. MLS gaskets effectively combine all of the benefits of previous gasket technologies into an extremely durable and adaptable component. The outer surfaces of the gasket are typically coated with a thin, fluorocarbon-based Viton layer in targeted areas to aid in surface sealing. OTHER GASKET TECHNOLOGIES The elastomeric head gasket is an example of a cost-reduction focused design. These gaskets use a single steel shim with a beaded coating of an elastomeric material such as silicone or Viton for fluid sealing. On the opposite end of the performance spectrum are modern solid copper head gaskets. In these setups, grooves around each cylinder carry a stainless steel O-ring that, when combined with a solid copper head gasket, is capable of sealing some of the highest combustion pressures found within reciprocating engines. -- SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind
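To get a feel for why each head bolt must carry several tonnes of clamping force, the force of peak combustion pressure acting on one cylinder's bore area can be estimated. A rough Python sketch, where the 86 mm bore and 90 bar peak pressure are assumed round-number inputs rather than figures from the video:

```python
import math

def combustion_force_kgf(bore_mm, peak_pressure_bar):
    """Force trying to lift the head off one cylinder at peak combustion
    pressure, expressed in kilograms-force (illustrative inputs only)."""
    area_m2 = math.pi * (bore_mm / 2 / 1000) ** 2
    force_n = peak_pressure_bar * 1e5 * area_m2
    return force_n / 9.81

# Assume an 86 mm bore and ~90 bar peak cylinder pressure:
print(f"~{combustion_force_kgf(86, 90):,.0f} kgf per cylinder")
```

The result, roughly 5,300 kgf per cylinder, makes it clear why several bolts rated around the quoted 4,500 kg each are needed to hold the joint closed.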

How The Most Hated Auto Part Changed The World

gYzzdPBCrxw | 21 Dec 2022

How The Most Hated Auto Part Changed The World

▶ Visit https://brilliant.org/NewMind to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription According to the Environmental Protection Agency’s estimates, in 2019, 7% of the light-duty vehicles in the United States did not comply with their mandated vehicle emission regulations. Even more astonishing is the fact that one specific component on these vehicles accounts for about 68% of these compliance failures. HISTORY OF SMOG Though the catalytic converter has become the automobile industry's primary mechanism for controlling exhaust emissions in internal combustion engines, its origin is a byproduct of industrialization as a whole. Around the turn of the 20th century, the smog created in urban areas by factory smokestacks triggered the first concerns for air quality. As the automobile and the internal combustion engine became more abundant, their impact on air quality grew more worrisome. During the 1940s, the growing problem of urban smog in the United States, specifically in the Los Angeles area, prompted the French mechanical engineer Eugene Houdry to take interest in the problem. Houdry was an expert in catalytic oil refining and had developed techniques for catalytically refining heavy liquid tars into aviation gasoline. WHAT IS SMOG The exhaust of all internal combustion engines used on vehicles is composed primarily of three constituent gases: nitrogen, carbon dioxide, and water vapor. In lean operating modes of gasoline engines and in diesel engines, oxygen is also present. Diesel engines by design generally operate with excess air, which always results in exhausted oxygen, especially at low engine loads. The nitrogen and oxygen are primarily pass-throughs of atmospheric gases, while the carbon dioxide and water vapor are the direct products of the combustion process. Depending on the engine type and configuration, these harmless gases form 98-99% of an engine’s exhaust. However, the remaining 1-2% of combustion products comprise thousands of compounds, all of which, to some degree, create air pollution. The primary components of these pollutants, carbon monoxide and nitrogen oxides, are formed within the highly reactive, high-temperature flame zone of the combustion cycle, while unburned and partially oxidized hydrocarbons tend to form near the cylinder walls, where the combustion flame is quenched. Particulate matter, especially in diesel engines, is also produced in the form of soot. In addition to this, engine exhaust also contains partially burned lubricating oil, and ash from metallic additives in the lubricating oil and wear metals. WHY CATALYTIC CONVERTERS In 1970, the United States passed the Clean Air Act, which required all vehicles to cut their emissions by 75% in only five years and mandated the removal of the antiknock agent tetraethyl lead from most types of gasoline. THE FIRST CONVERTER Modern automotive catalytic converters are composed of a steel housing containing a catalyst support, called a substrate, that’s placed inline with an engine’s exhaust stream. Because the catalyst requires a temperature of over 450 degrees C to function, they’re generally placed as close to the engine as possible to promote rapid warm-up and heat retention. On early catalytic converters, the catalyst media was made of pellets placed in a packed bed. These early designs were restrictive, sounded terrible, and wore out easily.
During the 1980s, this design was superseded by a cubic, ceramic-based honeycomb monolithic substrate coated in a catalyst. These new cores offered better flow and, because of their much larger surface area, exposed more catalyst material to the exhaust stream. The ceramic substrate used is primarily made of a synthetic mineral known as cordierite. TYPES OF CATS The first generation of automotive catalytic converters worked only by oxidation. These were known as two-way converters, as they could only perform two simultaneous reactions: the oxidation of carbon monoxide to carbon dioxide, and the oxidation of hydrocarbons to carbon dioxide and water. By 1981, "three-way" catalytic converters had superseded their two-way predecessors. Three-way catalytic converters also induce chemical reactions that reduce nitrogen oxides to harmless nitrogen. This reaction can occur with either carbon monoxide, hydrogen, or hydrocarbons within the exhaust gas. While three-way catalytic converters are more efficient at removing pollutants, their effectiveness is highly sensitive to the air-fuel mixture ratio. For gasoline combustion, this ratio is between 14.6 and 14.8 parts air to one part fuel. Furthermore, they need to oscillate between lean and rich mixtures within this band in order to keep both reduction and oxidation reactions running. Because of this, computer-controlled closed-loop electronic fuel injection is required for their effective use. -- SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind
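Since the description stresses how narrow the three-way converter's operating window is, a tiny sketch of how an air-fuel ratio might be classified against that 14.6-14.8:1 band can make the point concrete. The labels below are illustrative only, not an actual engine-control algorithm.

```python
STOICH_LOW, STOICH_HIGH = 14.6, 14.8   # AFR window quoted for gasoline

def mixture_state(afr):
    """Classify an air-fuel ratio relative to the window a three-way
    catalyst needs in order to run both oxidation and reduction."""
    if afr < STOICH_LOW:
        return "rich (excess fuel: NOx reduction favoured, CO/HC oxidation starved)"
    if afr > STOICH_HIGH:
        return "lean (excess air: oxidation favoured, NOx reduction starved)"
    return "within the three-way window"

for afr in (13.8, 14.7, 15.4):
    print(afr, "->", mixture_state(afr))
```

Closed-loop fuel injection exists precisely to keep the measured ratio dithering around the middle case.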

The Incredible Technology Behind Sandpaper

kTiIFzhxhq4 | 03 Dec 2022

The Incredible Technology Behind Sandpaper

▶ Visit https://brilliant.org/NewMind to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription Sandpaper belongs to a class of abrasive products known as coated abrasives. These products are composed of an abrasive element bonded to a backing material such as paper, fabrics, rubber, metals or resins, and they generally possess some degree of flexibility. King Solomon is said to have used a mysterious worm or an abrasive substance called the Shamir that had the power to cut through or disintegrate stone, iron and diamond. In the 13th century, Chinese craftsmen were known to bond sand, crushed shells and sharp seeds onto parchment with natural gum. Other notable natural substances that have been used as abrasive tools include shark skin, coelacanth scales, and boiled and dried rough horsetail plant. INDUSTRIAL ERA After mastering the process, Oakey would go on to found John Oakey & Sons Limited in 1833 with the goal of mechanizing the process, and within a decade Oakey had not only developed new adhesives and manufacturing techniques that enabled the mass production of sandpaper, but also created the first glass-based coated abrasives. These products used small grains of ground-up glass or garnet, called frit, that are far more durable than sand and also retain a sharp-edged structure as they wear down, producing a longer-lasting abrasive cutting action. An initial attempt at producing its own grinding wheels was met with little success, so the company, now branded as 3M, soon transitioned into the coated abrasives industry. 3M’s initial venture into the market using natural abrasives was still plagued with quality issues and its reputation began to suffer. Three-M-ite was a cloth-backed coated abrasive that relied on a new class of synthetic abrasives. These abrasives were a direct result of the advent of electric furnace technology that allowed a combination of base materials to be fused by heating them to temperatures above 2000°C or 3600°F, forming new crystal structures with favorable abrasive properties. NEW TYPES OF SANDPAPER In 1921, the company introduced the world’s first water-resistant coated abrasive, called Wetordry. When bonded to a waterproof paper backing and used with water, silicon carbide sandpaper dramatically enhanced many of the key properties that define the effectiveness of a coated abrasive. HOW SANDPAPER WORKS The effectiveness of sandpaper's cutting action is highly dependent on the shape of the abrasive grain, with sharper edges producing more localized pressure at the interface points of both materials. The durability of a sandpaper is primarily determined by the relative hardness between the abrasive and the work material, the adhesion properties and size of the abrasive grain or grit size, and its ability to resist loading, where ejected material is trapped between the grains. A NEW AGE OF SYNTHETIC ABRASIVES Alumina-zirconia is an incredibly tough and hard abrasive that offers nearly twice the performance of aluminum oxide in both efficiency and durability. It was also relatively easy to mass manufacture and quickly became a popular choice for metalworking abrasive products. SOL-GEL CERAMICS In the early 1980s, a revolutionary process that would dramatically improve abrasive performance would be introduced by 3M with the industry's first steps into nanotechnology. This new class of ceramic nanoparticle abrasives is produced using a method called the sol-gel process.
This new abrasive became the foundation of their new Cubitron product line, and it would soon gain wide acceptance in the metalworking industry, both in coated product form and as bonded grinding tooling. MICROREPLICATION In both synthetic and natural grain abrasives, the inconsistent particle shape of crushed grain creates inconsistent grinding and plowing action on the workpiece. The first trials in shape manipulation produced a coarsely shaped repeating pyramid mineral that was initially introduced in 1992 as a low-grit, aluminum oxide based metalworking product called 3M Trizact. By the turn of the century, 3M would introduce a new class of product line based on precision-shaped grain or PSG technology. In this process, a casting film is used to roll a microstructure onto a wet, uncured abrasive gel coating. As this occurs, a combination of UV light and heat is applied under the roller’s pressure, curing the abrasive in its designed structure. Microreplication would first be used to further refine the Trizact product line. Cubitron II utilized a unique standing ceramic aluminum oxide triangle microstructure that not only had an extremely sharp tip that would cut through the work material instead of plowing through it, but by design would fracture to produce a new sharp edge as it wore, effectively becoming a self-sharpening grain. -- SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind

The Story Of Fuel Injection

N0RIGWUnVFc | 23 Nov 2022

The Story Of Fuel Injection

This is the story of how fuel injection transformed from its simple beginnings as a mechanism to burn fuel oil into the complex, computer-driven integrated fuel management systems found on today's vehicles.

The Truth About Self Driving Cars

d5TiaIYdug4 | 02 Nov 2022

The Truth About Self Driving Cars

▶ Visit https://brilliant.org/NewMind to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription Almost a decade ago, a sizable list of tech companies, collectively wielding over $100 billion in investment, asserted that within five years the once-unimaginable dream of fully self-driving cars would become a normal part of everyday life. These promises, of course, have not come to fruition. Despite this abundance of funding, research and development, expectations are beginning to shift as the dream of fully autonomous cars is proving to be far more complex and difficult to realize than automakers had anticipated. THE LAYERS OF SELF DRIVING Much like how humans drive a vehicle, autonomous vehicles operate using a layered approach to information processing. The first layer uses a combination of multiple satellite-based systems, vehicle speed sensors, inertial navigation sensors and even terrestrial signals such as cellular triangulation and Differential GPS, summing the movement vector of the vehicle as it traverses from its start waypoint to its destination. The next layer is characterized by the process of detecting and mapping the environment around the vehicle, both for the purposes of traversing a navigation path and for obstacle avoidance. At present, the primary mechanisms of environment perception are laser navigation, radar navigation and visual navigation. LIDAR In laser navigation, a LIDAR system emits a continuous laser beam or pulse toward the target, and a reflected signal is received back at the transmitter. By measuring the reflection time, signal strength and frequency shift of the reflected signal, spatial cloud data of the target point is generated. Since the 1980s, early computer-based experiments with autonomous vehicles have relied on LIDAR technology, and even today it is used as the primary sensor for many experimental vehicles. These systems can be categorized as single-line, multi-line or omnidirectional. RADAR The long-range radars used by autonomous vehicles tend to be millimeter wave systems that can provide centimeter accuracy in position and movement determination. These systems, known as frequency modulated continuous wave radar or FMCW, continuously radiate a modulated wave and use changes in phase or frequency of the reflected signal to determine distance. VISUAL PERCEPTION Visual perception systems attempt to mimic how humans drive by identifying objects, predicting motion, and determining their effect on the immediate path a vehicle must take. Many within the industry, including the visual-only movement leader Tesla, believe that a camera-centric approach, when combined with enough data and computing power, can push artificial intelligence systems to do things that were previously thought to be impossible. AI At the heart of the most successful visual perception systems is the convolutional neural network or CNN. Their ability to classify objects and patterns within the environment makes them an incredibly powerful tool. As this system is exposed to real-world driving imagery, either through collected footage or from test vehicles, more data is collected and the cycle of human labeling of the new data and training the CNN is repeated. This allows them to both gauge distance and infer the motion of objects, as well as the expected path of other vehicles based on the driving environment.
At the current state of technology, the fatal flaw of autonomous vehicle advancement has been the pipeline by which these systems are trained. A typical autonomous vehicle has multiple cameras, each capturing tens of images per second. The sheer scale of this data, which requires human labeling and the appropriate retraining, becomes a pinch point of the overall training process. DANGERS Even within the realm of human-monitored driver assistance, in 2022 over 400 crashes involving automated technology in the previous 11 months had been reported to the National Highway Traffic Safety Administration. Several noteworthy fatalities have even occurred with detection and decision-making systems being identified as a contributing factor. COUNTERPOINT While the argument could be made that human error statistically causes far more accidents than autonomous vehicles, including the majority of driver-assisted accidents, when autonomous systems do fail, they tend to do so in a manner that would otherwise be manageable by a human driver. Despite autonomous vehicles having the ability to react and make decisions faster than a human, the environmental perception these decisions are based on is still so far below the capabilities of the average human driver that the majority of the public does not yet trust them. -- SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind
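The FMCW distance measurement mentioned above can be sketched from the standard relation R = c * f_beat * T_chirp / (2 * B). The chirp parameters below are assumed, generic values rather than the specifications of any particular automotive radar.

```python
C = 299_792_458.0   # speed of light, m/s

def fmcw_range_m(beat_freq_hz, chirp_time_s, bandwidth_hz):
    """Target range from the beat frequency of a linear FMCW chirp:
    R = c * f_beat * T_chirp / (2 * B).  Inputs are illustrative only."""
    return C * beat_freq_hz * chirp_time_s / (2 * bandwidth_hz)

# Example: a radar sweeping 300 MHz of bandwidth over a 40 microsecond chirp,
# measuring a 2.5 MHz beat frequency against the reflected signal:
print(f"{fmcw_range_m(2.5e6, 40e-6, 300e6):.1f} m")   # ~50 m to the target
```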

Defending Earth From Asteroids

8ZMJfx3lRVg | 06 Oct 2022

Defending Earth From Asteroids

▶ Visit https://brilliant.org/NewMind to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription The Tunguska event is believed to have been caused by the air burst of an asteroid or comet about 50–60 meters or 160–200 ft in size, at an altitude of 5–10 kilometers or about 3–6 miles. It was estimated that the asteroid had a kinetic energy of around 15 megatons, the equivalent of an explosion of 1,000 Hiroshima-type atomic bombs. It’s even been estimated that the explosion caused a deceleration of the Earth's rotation relative to the rest of the Solar System by 4 microseconds. The 2013 Chelyabinsk explosion, caused by a near-Earth asteroid about 20 meters or 66 ft in size, was estimated to have released the energy equivalent of around 500 kilotons of TNT. The Chelyabinsk event is a reminder of the destructive power of even small asteroids, and highlights both the frequency of these events and the importance of identifying and tracking these potential threats. PHO PHOs are defined as near-Earth objects, such as asteroids or comets, that have an orbit which approaches the Earth at a distance of 0.05 astronomical units, or 19.5 lunar distances, or less. 85% of these asteroids are known as Apollo asteroids, as they hold an orbit that keeps within the inner solar system. DETECTING POTENTIAL THREAT OBJECTS Discovery surveys scan the sky slowly, on the order of once a month, but produce deeper, more highly resolved data. Warning surveys, in contrast, utilize smaller telescopes to rapidly scan the sky for smaller asteroids that are within several million kilometers of Earth. These dedicated survey installations first started to appear around the late 1990s and were initially clustered together in a relatively small part of the Northern Hemisphere. Initiated in 2015, the ATLAS robotic astronomical survey and early warning system, located in the Hawaiian islands, is optimized for detecting smaller near-Earth objects a few weeks to days before they impact Earth. Further NASA funding brought the system to the Southern Hemisphere, with two additional telescopes becoming operational in early 2022 in South Africa. At present, several other Southern Hemisphere based surveys are also under construction. In addition to ground-based surveys, the Wide-field Infrared Survey Explorer or WISE infrared telescope, in Earth's orbit, was tasked with a 4-month extension mission called NEOWISE to search for near-Earth objects using its remaining capabilities. While this initial extension occurred in 2010, NASA reactivated the mission in 2013 for a new three-year mission to search for asteroids that could collide with Earth, and by July 2021, NASA would reactivate NEOWISE once again, with another PHO detection mission extending until June of 2023. Currently, a replacement space-based infrared telescope survey system called the NEO Surveyor is under development, with an expected deployment in 2026. DART MISSION DART was launched on November 24, 2021 on a dedicated Falcon 9 mission. The mission payload, along with Falcon 9's second stage, was placed directly on an Earth escape trajectory and into heliocentric orbit when the second stage reignited for a second escape burn. Despite DART carrying enough xenon fuel for its ion thruster, Falcon 9 did almost all of the work, leaving the spacecraft to perform only a few trajectory-correction burns with simple chemical thrusters for most of the journey.
On 27 July 2022, the DRACO camera detected the Didymos system from approximately 32 million km or 20 million mi away and began to refine its trajectory. These captured images were transmitted in real time to Earth using the RLSA communication system. A few minutes before impact, DART performed its final trajectory corrections. The impact ultimately changed the overall orbit of the asteroid system. An asteroid on a hypothetical collision course with Earth would only require a path shift of 6,500 km to avoid the Earth, a tiny amount relative to the tens of millions of kilometers it would travel while orbiting the sun. LICIACUBE Built to carry out observational analysis of the Didymos asteroid binary system after DART's impact, it was the first deep space mission to be developed and autonomously managed by an Italian team. HERA - FOLLOW UP In October 2024, the ESA will launch the Hera mission, with its primary objective being the validation of the kinetic impact method for deflecting a near-Earth asteroid on a collision trajectory with Earth. Hera will fully characterize the composition and physical properties of the binary asteroid system, including its sub-surface and internal structure. Hera is expected to arrive at the Didymos system in 2026. ---- SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind
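As a rough cross-check of the energy figures quoted in this description, the kinetic energy of an impactor follows from its size, density and speed. The density and the example inputs below are assumed round numbers, not measured values for either body.

```python
import math

MT_TNT_J = 4.184e15   # joules per megaton of TNT

def impact_energy_megatons(diameter_m, velocity_ms, density_kg_m3=3000.0):
    """Kinetic energy of a spherical rocky impactor, in megatons of TNT.
    Density and the example inputs are assumed, round-number values."""
    radius = diameter_m / 2
    mass = density_kg_m3 * (4 / 3) * math.pi * radius ** 3
    return 0.5 * mass * velocity_ms ** 2 / MT_TNT_J

# Chelyabinsk-scale object (~20 m at ~19 km/s) and a Tunguska-scale one (~60 m at ~20 km/s):
print(f"{impact_energy_megatons(20, 19_000):.2f} Mt")   # ~0.5 Mt, matching the ~500 kt estimate
print(f"{impact_energy_megatons(60, 20_000):.1f} Mt")   # ~16 Mt, the same order as the ~15 Mt estimate
```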

The Story of Brakes

Z-F3NDGeu2s | 12 Sep 2022

The Story of Brakes

▶ Visit https://brilliant.org/NewMind to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription On September 23rd, 2019, a new world record for 0-400-0 km/h was set at Råda airfield in Sweden by the Koenigsegg Regera. During this attempt, the Regera averaged around 1.1 MW of dissipation during the braking phase; in total, the system dissipated enough energy to power the average American home for just under 2 hours. On almost every powered wheeled vehicle, the brake system produces more deceleration force than the drivetrain’s acceleration force. ORIGINS The first wheeled vehicle brake systems consisted simply of a block of wood and a lever mechanism. To stop a vehicle, the lever was pulled, forcing the block of wood to grind against the steel rim of the wheel. Wooden brakes were commonly used on horse-drawn carriages and would even be used on early steam-powered cars that were effectively steam-powered carriages. DRUM BRAKES The first brake system specifically designed for cars with pneumatic tires would be developed from an idea first devised by Gottlieb Daimler. Daimler’s system worked by wrapping a cable around a drum coupled to a car’s wheel. As the cable was tightened, the wheel would be slowed down by friction. While it was far more responsive than a wooden block, the exposed friction material of the external design made it less effective when exposed to the elements. This idea evolved into the drum brake, with a fixed plate and two friction shoes. These early systems used a mechanical cam that, when rotated, would apply a force through the web to the lining table and its friction material. On drum brakes, the shoe located towards the front of the vehicle is known as the primary shoe, while the rearward one is designated the secondary shoe. MASTER CYLINDER At the drum brake, a hydraulic cylinder containing two pistons replaces the cam mechanism, applying a force outwards on the brake shoes as pressure builds within the system. In hydraulic brake systems, a combination of rigid hydraulic lines made from either steel or a nickel-copper alloy and flexible reinforced rubber hoses is used to transfer fluid pressure between the master cylinder and the brake cylinders. Hydraulics also increased safety through redundancy, by allowing the brake system to be split into two independent circuits using tandem master cylinders. Four-wheel hydraulic brakes would first appear on a production car with the 1921 Duesenberg Model A, though Rickenbacker would be the first manufacturer to offer them on vehicles that were mid-priced and more mass-appealing, in 1922. Shortly thereafter, other manufacturers would adopt hydraulic brakes and they quickly became the industry standard. VACUUM BOOSTER Many of these ideas involved using compressors to pressurize either air or hydraulic fluid in order to reduce the force needed by an operator to actuate a vehicle's brakes. First introduced by the Pierce-Arrow Motor Car Company in 1928, this system, originally designed for aviation, uses the vacuum generated by an engine’s air aspiration to build a vacuum within a device known as a brake vacuum servo. By the 1930s, vacuum-assisted drum brakes began to grow in popularity. DISC BRAKES The next leap in braking technology got its start in England in the late 1890s with the development of a disc-type braking system by the Lanchester Motor Company.
This system used a cable-operated clamping device called a caliper that would grab a thin copper disc coupled to the wheel in order to slow its rotation. In 1955, Citroën would introduce the Citroën DS, the first true mass-production car to field disc brakes. For the vast majority of modern disc-brake systems, the disc or rotor is made from gray cast iron. ABS These systems attempt to modulate brake pressure to find the optimal amount of braking force the tires can dynamically handle, just as they begin to slip. In most situations, maximum braking force occurs when there is around 10–20% slippage between the braked tire’s rotational speed and its contact surface. By the early 1950s, the first widely used anti-skid braking system, called Maxaret, would be introduced by Dunlop. It would take the integration of electronics into braking to make the concept viable for cars. As the wheel’s rotation starts to accelerate again while it recovers from the slip, the controller rapidly increases hydraulic pressure to the wheel once more, until it detects deceleration again. COMPOSITES Around the early 2000s, a derivative material known as carbon fiber-reinforced silicon carbide would start appearing in high-end sports cars. Called carbon-ceramic brakes, they carry over most of the properties of carbon-carbon brakes while being both more dense and durable, and they possess the key property of being effective even at the lower temperatures of road car use. ------ SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind
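As a back-of-the-envelope check on the Regera figures above, the kinetic energy at 400 km/h and the equivalent hours of household consumption can be estimated. The car's mass and the average home's power draw below are assumed round figures, not values taken from the record attempt.

```python
def braking_energy_check(mass_kg=1590.0, speed_kmh=400.0, avg_home_power_w=1200.0):
    """Rough check of the 0-400-0 braking claim: kinetic energy at speed,
    and how long that energy would run an average home.  The mass and
    household power draw are assumed round figures."""
    v = speed_kmh / 3.6                    # convert km/h to m/s
    kinetic_j = 0.5 * mass_kg * v ** 2     # energy the brakes must dissipate
    return kinetic_j / 1e6, kinetic_j / avg_home_power_w / 3600.0

mj, hours = braking_energy_check()
print(f"~{mj:.1f} MJ dissipated, ~{hours:.1f} h of average household consumption")
```

With these assumptions the answer lands around 10 MJ and roughly two hours, in line with the description's claim; at 1.1 MW average dissipation that energy is shed in about nine seconds.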

Quantum Programming - Part 1

2Eswqed8agg | 05 Aug 2022

Quantum Programming - Part 1

▶ Visit https://brilliant.org/NewMind to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription INTRO In modern digital computers, these instructions resolve down to the manipulation of information represented by distinct binary states. These bits may be abstractly represented by various physical phenomena, such as by mechanical, optical, magnetic, or electric methods, and the process by which this binary information is manipulated is similarly versatile, with semiconductors being the most prolific medium for these machines. Fundamentally, a binary computer moves individual bits of data through a handful of logic gate types. LIMITATIONS OF ALGORITHMS In digital computing, binary information moves through a processing machine in discrete steps of time. The number of steps required to complete a task, relative to the size of its input, is known as an algorithm’s complexity. An example of an algorithm whose step count does not depend on input size would be one that determines if a number is odd or even. Other algorithms execute at a rate that is directly correlated to the size of the algorithm’s input; these are known as linear time algorithms. This characteristic becomes obvious within a basic addition algorithm. Because the number of steps, and inherently the execution time, is directly determined by the size of the number inputs, the algorithm scales linearly in time. Constant and linear time algorithms generally scale to practical execution times in common use cases; however, one category of algorithm in particular suffers from the characteristic of quickly becoming impractical as it grows. These are known as exponential time algorithms and they pose a huge problem for traditional computers, as the execution time can quickly grow to an impractical level as input size increases. QUBIT Much like how digital systems use bits to express their fundamental unit of information, quantum computers use an analog called a qubit. Quantum computing, by contrast, is probabilistic. It is the manipulation of these probabilities as they move between qubits that forms the basis of quantum computing. Qubits are physically represented by quantum phenomena. HOW QUANTUM PROCESSING WORKS A qubit possesses an inherent phase component, and with this characteristic of a wave, a qubit’s phase can interfere either constructively or destructively to modify its probability magnitudes within an interaction. BLOCH SPHERE A Bloch sphere visualizes a qubit’s magnitude and phase using a vector within a sphere. In this representation, the two classical bit states are located at the top and bottom poles, where the probabilities become a certainty, while the remaining surface represents probabilistic quantum states, with the equator representing states where either classical bit state is equally probable. When a measurement is made on a qubit, it decoheres to one of the definitive polar states based on its probability magnitude. PAULI GATES Pauli gates rotate the vector that represents a qubit’s probability magnitude and phase 180 degrees around the respective x, y and z axes of its Bloch sphere. For the X and Y gates, this effectively inverts the probability magnitude of the qubit, while the Z gate only inverts its phase component. HADAMARD GATES Some quantum gates have no classical digital analogs. The Hadamard gate, or H gate, is one of the most important unary quantum gates, and it exhibits this quantum uniqueness. Take a qubit at state level 1, for example.
If a measurement is made in between two H gates, the collapsing of the first H gate’s superposition would destroy this information, making the second H gate’s effect only applicable to the collapsed state of the measurement. OTHER UNARY GATES In addition to the Pauli gates and the Hadamard gate, two other fundamental gates, known as the S gate and T gate, are common to most quantum computing models. CONTROL GATES Control gates trigger a correlated change in a target qubit when a state condition of the control qubit is met. A CNOT gate causes a state flip of the target qubit, much like a digital NOT gate, when the control qubit is at a state level of 1. Because the control qubit is placed in a superposition by the H gate, the correlation created by entanglement through the CNOT gate also places the target qubit into a superposition. When the control or target qubit's state is collapsed by measurement, the other qubit's state is always guaranteed to be correlated by the CNOT operation. CNOT gates are used to create other composite control gates, such as the CCNOT gate or Toffoli gate, which requires two control qubits at a 1 state to invert the target qubit, the SWAP gate, which swaps two qubit states, and the CZ gate, which performs a phase flip. When combined with the fact that a qubit is continuous by nature and has infinite states, this quickly scales up to a magnitude of information processing that rapidly surpasses traditional computing. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind
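The H-then-CNOT sequence described above is the canonical way to entangle two qubits into a Bell state. Below is a minimal sketch using plain NumPy matrices rather than any quantum computing framework, tracking the two-qubit state as a 4-element complex vector.

```python
import numpy as np

# Single-qubit and two-qubit gates as unitary matrices.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard: creates an equal superposition
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                   # flips the target when the control is |1>

# Start both qubits in the classical |00> state.
state = np.array([1, 0, 0, 0], dtype=complex)

# Apply H to the control (first) qubit, then entangle with CNOT.
state = np.kron(H, I) @ state
state = CNOT @ state

print(state)                       # amplitudes 1/sqrt(2) on |00> and |11>: a Bell state
print(np.abs(state) ** 2)          # measurement probabilities: 50% |00>, 50% |11>
```

Measuring either qubit then forces the other to the same value, which is the guaranteed correlation the description refers to.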

The Plot To Eliminate Cold War Scientists

-LUPnrL1b8M | 30 Jun 2022

The Plot To Eliminate Cold War Scientists

▶ Visit https://brilliant.org/NewMind to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription During the 1980s, amid the peak of the Cold War’s technology race, a series of peculiar deaths among scientists working in Britain's defense industry began to baffle investigators. Most of the victims were research staff of Marconi Electronic Systems, with the majority being computer scientists working on defense projects associated with US Strategic Defense Initiative research and development. Furthermore, many of these deaths were under bizarre circumstances, with their underlying causes ruled as undeterminable. While the Marconi deaths grabbed the headlines, they were accompanied by other suspicious deaths throughout the defense industry of Europe. In 1986, several West German scientists working on projects tied to the Strategic Defense Initiative were also found dead under mysterious circumstances; all had been involved either directly or peripherally in the program and its related projects. Among the journalists covering the story, UK Computer Weekly correspondent Tony Collins would file a series of noteworthy stories investigating the deaths. In 1990, Collins would go on to publish his book, ‘Open Verdict’, chronicling the series of deaths and the suspicious anecdotal evidence that tied them together. However, despite the overwhelming evidence of a clandestine plot to hinder the UK’s defense industry, no firm conclusions as to its true nature were ever reached. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind

The Evolution Of Automotive Paint

b_hgPinCZks | 02 Jun 2022

The Evolution Of Automotive Paint

▶ Visit https://brilliant.org/NewMind to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription VARNISH At the dawn of the automotive industry, early motor vehicles were painted in a manner similar to both the wooden furniture and horse-drawn carriages of the time. A varnish-like product was brushed onto the vehicle’s surfaces and subsequently sanded and smoothed. After multiple layers of varnish were established, the vehicle was then polished. Varnishes are generally composed of a combination of a drying oil, a resin, and a solvent. LACQUERS The first true automotive-specific coatings would emerge in the early 1920s as a result of an accidental discovery. The resulting liquid became the basis for nitrocellulose lacquer, a product that would become a popular staple of the automotive finishing industry for decades to come. Nitrocellulose was the first man-made plastic and it was created in 1862 by Alexander Parkes. DuPont chemist Edmund Flaherty would go on to refine the use of nitrocellulose dissolved in a solvent, creating a system that used a combination of naphtha, xylene, toluene, acetone, various ketones, and plasticizing materials that enhance durability and flexibility, to produce a fast-drying liquid that could be sprayed. Nitrocellulose lacquer has the advantage of being extremely fast drying, and it produces a tougher and more scratch-resistant finish. ENAMELS By the 1930s, the development of alkyd enamel coatings would offer a significant enhancement over the properties of existing lacquers. This reaction occurs between the fatty acids of the oil portion of the resin and oxygen from the surrounding air, creating a durable film as the solvent evaporates. ACRYLICS In the 1950s, a new acrylic binder technology would be introduced that would transform the automotive coatings industry. Acrylic paints are based on polyacrylate resins. These synthetic resins are produced by the polymerization of acrylic esters or acrylates, forming a durable plastic film. Like previous systems, the acrylates are dissolved within a hydrocarbon solvent and applied by spraying. However, unlike alkyds, acrylate polymerization occurs without surrounding oxygen and, in most production acrylic systems, is initiated with a catalyst based on isocyanates or melamines. Polyacrylate resins do not easily absorb radiation in the solar spectrum, giving them excellent resistance to UV degradation when compared to previous resins. UNDERCOATS Since the inception of their use, most undercoats or primers were composed of a combination of alkyd and oleaginous resins to produce an interface coating. Initially these coatings were applied to individual panels through dip coating, though this would eventually evolve into a combination of dipping and spraying entire body assemblies. Because undercoats directly interface with the vehicle's base metal, they serve as the primary form of corrosion protection. However, the process by which they were applied resulted in inconsistent coverage throughout the vehicle. This was due to recesses and enclosed areas on the vehicle’s body. In the 1960s, Ford Motor Company would pioneer a dramatically different approach to vehicle priming through electrodeposition. The car body is coated on the production line by immersing the body in a tank containing the aqueous primer dispersion and subjecting it to a direct current charge.
EPA By the end of the 1970s, the EPA had sought to reduce photochemically reactive hydrocarbon solvent discharges from industrial finishing operations by introducing emission requirements that restricted finishes to be sprayed at a minimum volume solids content of 60%. CLEAR COAT This initiative led to a new approach to how automotive finishes were utilized, with specific functions of an automotive coating now being directly engineered into each layer. In the late 1970s, the first wet-on-wet systems were developed, consisting of a thin base coat and a thicker clear coat. This separation of coating function now allowed for completely different chemistries to be employed between layers. Based on solvents composed of glycol ethers and water, these systems dramatically reduced hydrocarbon emissions and were generally high-solid in nature, easily meeting EPA requirements. POLYURETHANES Modern automotive coatings overcome these limitations by using a hybrid dispersion of acrylics, polyurethanes and even polyesters. These systems, known as acrylic-polyurethane enamels, incorporate the monomers of each resin in a proprietary combination that, once initiated by a catalyst, undergoes polymerization. By adjusting the constituent resins and their quantities, as well as the catalyst formulation, the sequence and rate at which this polymer network forms can be modified, and the properties of the composite film adjusted to suit the needs of the product. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind

The Image Format That Made The Internet

wW8GE9HyI8M | 04 May 2022

The Image Format That Made The Internet

▶ Visit https://brilliant.org/NewMind to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription The GIF format was introduced in 1987 by CompuServe, America's first major commercial online service provider. In the early 1980s, advances in modem speed and processing power, along with the introduction of the CompuServe B file transfer protocol, allowed for the exchange of graphics on the platform. This also opened the door to CompuServe's eventual transition to a GUI-based interface. At the time, access to most online information services was billed by time, and for graphics to be exchanged cost-effectively and within a practical transfer time, the service required a method to reduce the memory requirements of informationally dense graphical data. PALETTES Because of this, the concept of a palette, or color lookup table, was introduced. A palette is a data structure that contains an indexable lookup table of chromaticity coordinates. In this mechanism, each pixel of the image data is defined by a palette table index, where its color data can be found. A 2-bit-per-pixel image, for example, can reference 4 color definitions within a palette, while a 16-bit-per-pixel image can reference a little over 65,000 unique color definitions. IMAGE COMPRESSION This technique is known as lossy compression, as it alters the original image data in the compression process. While lossy compression can dramatically reduce memory requirements, the technique was far too processor-intensive for consumer computer hardware of the time, and its lossiness made it unusable for functional graphics, such as those of graphical user interfaces. Lossless image compression, which does not change the image data, was chosen for the GIF format, as the available techniques were relatively simple and could operate easily on existing hardware. It also best matched the intended application of sharp-edged, line-art-based graphics that used a limited number of colors, such as logos and GUI elements. RLE Run-length encoding allowed long runs of similar pixels to be compressed into one data element, and it proved to be most efficient on image data that contained simple graphics, such as icons and line drawings. On more complex images, however, the overhead of the counting mechanism can require more memory than the original image, making the technique unusable for them. LZW Wilhite had concluded that run-length encoding was not an effective solution and looked towards a new class of data-compression algorithms developed in the late 1970s by Jacob Ziv and Abraham Lempel. A key characteristic of LZW is that the dictionary is neither stored nor transmitted, but rather built within the algorithm as the source data is encoded or the compressed data is decoded. ENCODING In the encoding process, an initial code-width size is established for the encoded data. An 8-bit data source, for example, would require the first 256 dictionary indexes to be mapped to each possible 8-bit word value. From here, if more data is available in the source data stream, the algorithm returns to its loop point. If there is no data left to encode, the contents of the remaining index buffer are found within the dictionary and their index code-word is sent to the output code stream, completing the final encoding. DECODING A dictionary table matched to the bit-width specification of the encoded data is first initialized in a manner similar to the encoding process.
Because the encoding process always starts with a single value, the first code-word read from the input code stream always references a single value within the dictionary, which is subsequently sent to the decoded output data stream. Each subsequent code-word is then looked up in the dictionary; if an entry is found, or the code-word represents a single value, the referenced values are sent to the decoded output data stream. VARIABLE BIT-DEPTH This requires the bit-width of a code-word to be, at minimum, one bit larger than that of the image data. This is accomplished by starting the encoding process with the assumption that the bit-width of a code-word will be one bit larger than the image data's bit-depth. This is the minimum needed to index every possible value of the image data plus the control codes. IMAGE LAYOUT Each image is contained within a segment block that defines its size, location on the canvas, an optional local color palette, and the LZW-encoded image data along with its starting code-word bit-width size. Each image can either use its own local color palette or the global color palette, allowing for the use of far more simultaneous colors than a single 8-bit palette would allow. The LZW-encoded data within the image block is packaged into linked data sub-blocks of 256 bytes or less. GRAPHIC CONTROL EXTENSION The graphic control extension also took advantage of the format's ability to store multiple images with the introduction of basic animation. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind
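As a concrete illustration of the dictionary-building behavior described above, here is a minimal LZW encode/decode sketch in Python. It operates on plain bytes and omits GIF-specific details such as the clear and end-of-information control codes and the variable code widths, so it is a teaching sketch under those assumptions rather than the actual GIF codec.

```python
# Minimal LZW sketch: plain byte input, fixed-width integer codes,
# no GIF control codes or variable code-width handling.
def lzw_encode(data: bytes) -> list[int]:
    # Initialize the dictionary with every possible single-byte value.
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    buffer = b""
    output = []
    for byte in data:
        candidate = buffer + bytes([byte])
        if candidate in dictionary:
            # Keep extending the current run while it still exists in the dictionary.
            buffer = candidate
        else:
            # Emit the code for the longest known prefix, then learn the new string.
            output.append(dictionary[buffer])
            dictionary[candidate] = next_code
            next_code += 1
            buffer = bytes([byte])
    if buffer:
        output.append(dictionary[buffer])  # flush the final buffered string
    return output

def lzw_decode(codes: list[int]) -> bytes:
    # Rebuild the same dictionary on the fly; it is never transmitted.
    dictionary = {i: bytes([i]) for i in range(256)}
    next_code = 256
    previous = dictionary[codes[0]]
    output = bytearray(previous)
    for code in codes[1:]:
        if code in dictionary:
            entry = dictionary[code]
        else:
            # Special case: the code refers to a string being defined right now.
            entry = previous + previous[:1]
        output += entry
        dictionary[next_code] = previous + entry[:1]
        next_code += 1
        previous = entry
    return bytes(output)

sample = b"ABABABABABABAB"
assert lzw_decode(lzw_encode(sample)) == sample
```

Note how the decoder reconstructs the same dictionary from the code stream alone, which is why the dictionary never needs to be stored or transmitted alongside the image data.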

Detecting Nuclear Detonations

ZCaiuGsTrjU | 31 Mar 2022

Detecting Nuclear Detonations

▶ Visit https://brilliant.org/NewMind to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription While the mass use of strategic nuclear weapons is the ultimate terror of modern warfare, it represents the final stage of conflict escalation on the world stage. A more immediate threat comes from tactical nuclear weapons. Tactical nuclear weapons are generally considered low-yield, short-ranged weapons designed for use at the theater level, alongside conventional forces. Both the US and Russia define a tactical nuclear weapon by its operational range. Their battlefield-centric missions and perception as being less destructive encourage their forward-basing and can make the decision to use tactical nuclear weapons psychologically and operationally easier, potentially pushing a conflict into the realm of strategic nuclear escalation. Surveillance systems designed to detect these detonations must home in on the telling characteristics of a nuclear weapon. BHANGMETER As the very first nuclear weapon detonated, it was observed by both cameras and other optical instrumentation that a peculiar double-peaked illumination curve of light was emitted from the bomb. It was soon determined, from analyzing the fireball expansion phenomenon of the detonation, that two millisecond-range peaks of light were separated by a period of minimum intensity, lasting from a fraction of a second to a few seconds, which corresponded to the atmospheric shockwave breaking away from the expanding front of the fireball. The time it took for the shockwave front to transition from opaque to transparent was directly correlated to the weapon's yield. FIRST METERS In 1948, during the third series of American nuclear testing, called Operation Sandstone, the first purpose-built proof-of-concept device for specifically detecting nuclear detonations would be tested. While this device was simple and devised on site, it provided a measurement of light intensity over time using a photocell coupled to a cheap oscilloscope. During a meeting with the project group, Reines would coin the term Bhangmeter for the device. A calibration curve was developed from the averaged readings of these measurement devices and the test weapon's yield. From this data, the bhangmeter was able to optically determine a nuclear weapon's yield to within 15%. Though blue light was used to produce this initial calibration data due to its higher contrast within the detonation, it was soon discovered that changing the observed spectrum of visible light also modified the amount of time it took for the light intensity to start its initial drop-off. During further tests it was also realized that the altitude of a bomb's detonation could also be determined from analyzing the time-to-minimum light intensity, as the duration of the initial fireball expansion was largely influenced by the effect the ground had on its shape. ADOPTION These aviation-compatible, AC-powered systems were specifically designed and deployed to monitor the Soviet test of Tsar Bomba, the largest nuclear weapon ever detonated. Around the same time, the first large-scale nuclear detonation detection network would be deployed by the US and the UK. Linked by Western Union's telegraph and telephone lines, the system was designed to report the confirmation of a nuclear double-flash before the sensors were destroyed by the detonation.
The Bomb Alarm Display System was in use from 1961 to 1967, and while it offered adequate surveillance for the onset of nuclear war, the emergence of the Partial Nuclear Test Ban Treaty in 1963 now warranted the ability to monitor atmospheric nuclear testing at the global level. SATELLITES The solution to the challenge of this new scope of nuclear detection came with Project Vela, a group of satellites developed specifically for monitoring test ban compliance. They could determine the location of a nuclear explosion to within about 3,000 miles, exceeding the positional and yield accuracy of the original system. GPS As the Vela program was being phased out in the mid-1980s, the task of specifically detecting nuclear detonations would become a part of the new Global Positioning System. Known as the GPS Nuclear Detonation Detection System, this capability took advantage of the extensive coverage of Earth's surface offered by the constellation. These bursts propagate from a nuclear detonation in a spherical shell, and by measuring their arrival times against the accurate timing information of 4 or more satellites of the GPS constellation, the time differences of arrival can be used to calculate the position of the x-ray burst source. Each of the GPS satellites is equipped with a specialized antenna and support system to both detect and measure these EMP incidences. The bhangmeters that complement the other sensors on the GPS constellation are the most sophisticated satellite-based system to date. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind
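A rough sketch of the time-difference-of-arrival idea mentioned above: given synchronized clocks on four or more satellites, the burst position and emission time can be solved from the measured arrival times. The satellite coordinates, the synthetic burst location, and the use of scipy's least-squares solver are illustrative assumptions, not details of the actual NDS processing.

```python
# Toy TDOA multilateration: four satellites, four unknowns (x, y, z, t0).
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light, m/s

# Hypothetical satellite positions (m) and a synthetic burst used to generate
# the arrival times that the solver will then invert.
sats = np.array([
    [ 15e6,  10e6,  20e6],
    [-12e6,  18e6,  15e6],
    [  8e6, -20e6,  18e6],
    [-15e6, -12e6,  22e6],
])
true_source = np.array([1.2e6, -0.8e6, 0.3e6])
arrival_times = np.linalg.norm(sats - true_source, axis=1) / C  # emission at t = 0

def residuals(params):
    # Unknowns: burst position (m) and emission time expressed as a distance
    # b = c * t0 (m), which keeps all parameters on a similar numeric scale.
    pos, b = params[:3], params[3]
    predicted_ranges = np.linalg.norm(sats - pos, axis=1)
    measured_pseudoranges = C * arrival_times - b
    return predicted_ranges - measured_pseudoranges

solution = least_squares(residuals, x0=np.zeros(4)).x
print("estimated burst position (m):", np.round(solution[:3]))
```

With four satellites there are exactly four equations for the four unknowns; additional satellites simply over-determine the fit and improve accuracy.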

The Science Of Cardboard

PM1QMGPL79A | 03 Mar 2022

The Science Of Cardboard

▶ Visit brilliant.org/NewMind to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription In 2020, the United States hit a record high in its yearly use of one of the most ubiquitous manufactured materials on Earth, cardboard. As of 2020, just under 97% of all expended corrugated packaging is recovered for recycling, making this inexpensive, durable material an extraordinary recycling success story. THE RISE OF PAPER PACKAGING This processed pulp is then used to produce paper. Papermaking machines use a moving woven mesh to create a continuous paper web that aligns the fibers held in the pulp, producing a continuously moving wet mat of fiber. The invention of several paper-based packaging forms and processes stemmed from this boom, with the corrugated fiberboard shipping container quickly becoming the most dominant. INVENTION OF CORRUGATION The first known widespread use of corrugated paper was in the 1850s, with an English patent being issued in 1856 to Edward Charles Healey and Edward Ellis Allen. Three years later, Oliver Long would patent an improvement on Jones's design with the addition of an adhered single paper facing to prevent the unfolding of the corrugation, forming the basis for modern corrugated fiberboard. American Robert Gair, a Brooklyn printer and paper-bag maker, had discovered that by cutting and creasing cardboard in one operation he could make prefabricated cartons. In a partnership with the Thompson and Norris company, the concept would be applied to double-faced corrugated stock, giving rise to the production of the first corrugated fiberboard boxes. In 1903, the first use of corrugated fiberboard boxes for rail transport occurred when the Kellogg brothers secured an exception to the wooden-box requirement of the Central Freight Association's railroads. HOW IT'S MADE Rolls of paper stock are first mounted onto unwinding stands and are pulled into the machine at the feeding side of the corrugator, also known as the "wet end". The paper medium is heated to around 176-193 degrees C so it can be formed into a fluted pattern at the corrugating rolls. The corrugating rolls are gear-like cylinders that are designed to shape the paper medium into a fluted structure as it moves through them. As the newly formed fluted paper leaves these rolls, an adhesive is applied to the flute tips and the first liner is roller-pressed on. The paper stock that forms this liner is often pre-treated with steam and heat before this binding process. The adhesives used in modern corrugated fiberboard are typically water-based, food-grade corn starches combined with additives. A second liner is applied by adding adhesive to the flute tips on the other side of the paper medium. After curing, the sheets may be coated with molten wax to create a water-resistant barrier if the packaging is expected to be exposed to excessive amounts of moisture, such as with produce or frozen food products. PAPER SOURCE While the first packaging papers relied on the chemical-based Kraft pulping process, modern production relies primarily on mechanical pulping, due to its lower cost and higher yield. For each production run of corrugated fiberboard, a target set of specifications based on customer requirements determines both the quality control and the physical properties of the fiberboard. BOXES Corrugated sheets are run through a splitter-scoring machine that scores and trims the corrugated stock into sheets known as box blanks.
Within the flexographic machine, the final packaging product is created. Flexographic machines employ both printing dies and rotary die-cutters on a flexible sheet that are fitted to large rollers. Additionally, a machine known as a curtain coater is also utilized to apply a coat of wax for moisture-resistant packaging. RECYCLING The slurry is sent through an industrial magnet to remove metal contaminants. Chemicals are also applied to decolorize the mixture of inks within the slurry. Because the paper produced by purely recycled material will have a dull finish and poor wear characteristics, virgin pulp is typically blended into the slurry to improve its quality. This blended pulp is then directly used to produce new paper. Recycling paper-based packaging is so effective that only 75% of the energy used to produce virgin paper packaging is needed to make new cardboard from recycled stock. Aside from diverting waste material from landfills, cardboard made from recycled stock also requires 50% less electricity and 90% less water to produce. KEY FOOTAGE Georgia-Pacific Corrugated Boxes: How It's Made Step By Step Process | Georgia-Pacific https://youtu.be/C5nNUPNvWAw SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind

The World's Fastest Electric Airplane

GsXGJ1O3ccQ | 15 Jan 2022

The World's Fastest Electric Airplane

▶ Visit brilliant.org/NewMind to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription THE SPIRIT OF INNOVATION On November 16, 2021, an experimental aircraft called the 'Spirit of Innovation', designed by Rolls-Royce, would record an average speed of just under 556 km/h or 345 mph over a 3 km span. The Spirit of Innovation is the world's fastest all-electric aircraft. It superseded the previous record set by the Siemens eAircraft Extra 330 LE aerobatic aircraft in 2017 by over 213 km/h or 132 mph, and it also climbed over 60 seconds faster to 3,000 meters or about 10,000 ft. BUILDING THE AIRCRAFT The Lycoming engine was replaced by three electric motors and the fuel tank by three battery packs. Combined, the battery packs, motors, and control equipment were similar in weight to the existing power plant; however, this fully electric system was now capable of outputting around 530 hp continuously and almost 1,000 hp in bursts. By comparison, in a conventional aircraft, the overall weight is reduced as the fuel is used up. To compensate for this, the aircraft was converted to a single-seater to reduce weight further, though at the cost of moving the center of gravity slightly forward. MOTOR Designing the propulsion unit for the Spirit of Innovation was another major hurdle for the ACCEL team. Not only must the electric motor be compact and powerful, but it must also possess a high degree of reliability and the ability to tolerate failures for aviation use. Because no single electric motor was commercially available that would meet these requirements, the team decided on a propulsion configuration composed of a stack of 3 YASA 750R axial-flux electric motors coupled by a single shaft running through them. Using 3 of these motors in tandem not only met the power requirements of the ACCEL team, but it also offered redundancy against motor failure. While the entire triple-motor system weighed just 111 kg or about 244 lbs, it was capable of generating around 750 kW or 1,000 hp, though continuous total power was limited to around 210 kW or about 280 hp due to thermal constraints. COOLING Unlike road-going vehicles, aircraft require relatively larger amounts of continuous power simply to cruise. For an electric aircraft this creates safety concerns, as the high wattage draw, combined with the density of the propulsion system's packaging, generates significant amounts of heat. Within the battery pack, each individual cell was fitted with both voltage and temperature sensors. This robust sensor array not only drove the thermal management system, but also served as a safety mechanism, providing the pilot information on the health of the battery as well as alerting to potential failure conditions. In the event of catastrophic battery failure, the thermally insulated containing structure is designed to be fireproof, making use of a purging mechanism that maintains an inert argon atmosphere within it. FIRST FLIGHT Over the next few weeks, around thirty 15-minute flights were conducted, each gradually increasing flying speed as the functionality of the propulsion system was validated. As speeds increased, Rolls-Royce had to make use of a Spitfire as the chase plane to keep up with the electric aircraft. While O'Dell described the aircraft as being not very different to fly than the existing aircraft he was familiar with, its electric propulsion system did present some unique characteristics.
In between flights, the battery packs could be recharged individually within an hour, though this was primarily limited by the electrical infrastructure that the team operated from. RESULTS Unlike electric cars, high-voltage batteries in aviation use are at higher risk of arcing from the corona effect due to the lower air density at higher altitudes. Future electric aircraft designs may need to adopt a staged operating voltage system that can balance the risk of arcing with the aircraft's size, power requirements, and altitude. Additionally, electric aircraft subject their batteries to far more frequent full discharges than road electric vehicles, which drastically affects battery life. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind

Portable Nuclear Power

XSzzNY20zLY | 22 Nov 2021

Portable Nuclear Power

▶ Visit brilliant.org/NewMind to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription NUCLEAR POWER Of all the power sources available to man, none has been as extraordinary in energy yield as nuclear fission. In fact, a single gram of fissile nuclear fuel, in theory, contains as much free energy as the gasoline contained within a small fuel tanker truck. By the early 2000s, concerns over carbon dioxide emissions would bring about a renewed interest in nuclear power. And with this came a myriad of developments aimed at improving the safety and sustainability of large-scale reactors. However, in recent years, a new paradigm in how nuclear fission reactors are created and utilized is starting to gain momentum. NUCLEAR FISSION To date, almost all nuclear power reactors extract energy from the process of nuclear fission. In this process, a fissile nuclear fuel is bombarded with neutrons. As the nucleus of a fuel atom captures a neutron by the strong nuclear force, it begins to deform, resulting in the nuclear fragments exceeding the distances at which the strong nuclear force can hold the two groups of charged nucleons together. This tug-of-war between the strong nuclear force and the electromagnetic force ends with the two fragments separating by their repulsive charge. Because fission reactions are primarily driven by neutron bombardment, establishing and regulating a sustained fission chain reaction becomes feasible through controlling the free neutron movement within a reactor. This characteristic allows fission reactions to be "throttled", making them well suited for electric power generation. FIRST REACTORS The first practical nuclear reactor was developed during the early 1950s by the U.S. Navy. Known as the S1W reactor, it would see its first deployment on the USS Nautilus in January 1954. The S1W was a relatively simple and compact design known as a pressurized water reactor. The fission chain reaction can also be throttled by introducing neutron absorbers into the reactor core. IMPROVEMENTS ON REACTOR DESIGN Within a decade, the two-circuit design of pressurized water reactors would be reduced to a single-loop configuration with the introduction of boiling water reactors. Designed primarily with civilian power generation in mind, a boiling water reactor directly produces steam by heating cooling water with the reactor core. This steam is then directly used to drive a turbine, after which it is cooled in a condenser, converted back to liquid water, and pumped back into the reactor core. Boiling water reactors still utilized water as the neutron moderator, and chain reaction throttling via control rods or blades was also retained. GAS REACTORS In gas-cooled reactors, an inert gas is used to transfer heat from the reactor core to a heat exchanger, where steam is generated and sent to turbines. Neutron moderation is accomplished by encasing the nuclear fuel in either graphite or heavy water. The effectiveness of how they moderate neutrons also permits the use of less-enriched uranium, with some reactors being able to operate purely on natural uranium.
PEBBLE-BED REACTORS These thin, solid layers are composed of a 10-micron porous inner carbon layer that contains the fission reaction products, a neutron-moderating and protective 40-micron pyrolytic carbon inner layer, a 35-micron silicon carbide ceramic layer to contain high-temperature fission products and add structure to the particle, and another protective 40-micron pyrolytic carbon outer layer. TRISO fuel is incredibly robust and resilient. The particles can survive extreme thermal swings without cracking, as well as the high pressures and temperatures of fission cooling systems. Gas-cooled reactors work especially well with TRISO fuel because of their ability to operate at high temperatures while remaining chemically inert. When combined with TRISO fuel, they also offer incomparable levels of nuclear containment. SMRs SMRs are nuclear reactors of relatively small power generation capacity, generally no larger than 300 MW. They can be installed in multi-reactor banks to increase plant capacity, and they offer the benefit of lower investment costs and increased safety through containment. PROJECT PELE Called Project Pele, the program is planned around a two-year design-maturation period where a Generation IV reactor design will be adapted to small-scale, mobile use. X-Energy, in particular, has promoted TRISO pebble-bed technology as the ideal choice for such a rugged reactor design. In addition, the full-scale deployment of Fourth Generation nuclear reactor technologies will have significant geopolitical implications for the United States while reducing the nation's carbon emissions. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind SOCIAL MEDIA LINKS Instagram - https://www.instagram.com/newmindchannel
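As a rough sanity check of the energy-density comparison made in the description above, the arithmetic below estimates the energy released by completely fissioning one gram of U-235 and converts it to an equivalent volume of gasoline. The 200 MeV per fission and 34.2 MJ/L figures are standard approximations, and the "small tanker" comparison is illustrative.

```python
# Back-of-the-envelope energy density of fissile fuel vs. gasoline.
AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13
ENERGY_PER_FISSION_J = 200 * MEV_TO_J            # ~200 MeV released per U-235 fission

atoms_per_gram = AVOGADRO / 235                   # U-235 atoms in one gram
fission_energy_j = atoms_per_gram * ENERGY_PER_FISSION_J
gasoline_equivalent_l = fission_energy_j / 34.2e6  # ~34.2 MJ per litre of gasoline

print(f"{fission_energy_j:.2e} J ~= {gasoline_equivalent_l:.0f} litres of gasoline")
# ~8.2e10 J, on the order of a few thousand litres -- roughly a small tanker load.
```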

The Modem: Building The Internet With Sound

kaIZ6j0mAfU | 12 Oct 2021

The Modem: Building The Internet With Sound

▶ Visit brilliant.org/NewMind to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription THE INTERNET ARPANET was initially created to facilitate communications among government agencies and research institutions. The civilian ARPANET would eventually be migrated to a more modernized parallel network called NSFNET. Around this time, the restrictions on the commercial use of NSFNET would be lifted, and with that came the emergence of the commercial internet service provider industry. This shift to commercialization became the catalyst for a massive influx of money, technical advancement, and the proliferation of access that transitioned the early internet from the military's technological marvel to the massive communications apparatus that infiltrates every aspect of our lives today. BAUD RATE The baud unit's definition would be formally revised in 1926 to represent the number of distinct symbol changes made to a transmission medium per second. THE FIRST MODEMS Derived from the term modulator-demodulator, a modem converts digital data into a signal that is suitable for a transmission medium. A year later, a commercial variant of the SAGE modem would be introduced to the public as the Bell 101 Dataset. FSK In 1962, the underlying technology of the modem would split from that of teleprinters with the introduction of the Bell 103 dataset by AT&T. Because the Bell 103 was now fully electronic, a new modulation method was introduced that was based on audio frequency-shift keying to encode data. In frequency-shift keying, a specific transmitted frequency is used to denote a binary state of the transmission medium. By the mid-1970s, the baud rate of frequency-shift keying modems would be pushed even higher with the introduction of 600-baud modems that could operate at 1200 baud when used in one-directional communication, or half-duplex mode. HAYES SMARTMODEM The Smartmodem introduced a command language which allowed the computer to make control requests, including telephony commands, over the same interface used for the data connection. The mechanics allowed the modem to switch between command mode and data mode by transmitting an escape sequence of three plus symbols. From this, the Hayes Smartmodem quickly grew in popularity during the mid-1980s, making its command set, the Hayes command set, the de facto standard of modem control. QAM As the early 1980s progressed, manufacturers started to push their modem speeds past 1200 bps. In 1984, a new form of modulation called quadrature amplitude modulation would be introduced to the market. Quadrature amplitude modulation is an extension of phase-shift keying that adds additional symbol encoding density per baud unit by overlapping amplitude levels with phase states. The first modem standard to implement quadrature amplitude modulation was ITU V.22bis, which employed a variation of the modulation known as 16-QAM to encode 16 different symbols, or 4 bits of data, within each baud unit, using a combination of 3 amplitude levels and 12 phases. TRELLIS Trellis-coded modulation differs dramatically from previous modulation techniques in that it does not transmit data directly. A state-machine-based algorithm is then used to encode data into a stream of possible transitions between branches of the partition set. This transition data is used to recreate all possible branch transitions in a topology that is similar to a trellis.
From this, using a predetermined rule for path selection, the most likely branch transition path is chosen and used to recreate the transmitted data. HIGH SPEED MODEMS By 1994, baud rates would be increased to 3,429 symbols per second, with up to 10 bits per symbol of encoding now becoming possible. The dramatic boost in data rates created by TCM directly changed the look and feel of the growing internet. 56K In early 1997, the modem would get one last boost in bitrate with the introduction of the first 56k dial-up modems. Pushing speeds above 33.6 kbps proved to be extraordinarily challenging, as the process that digitized telephone audio signals for routing by telecommunications infrastructure made it very difficult for denser data transmissions to survive the digitizing process. This difficulty led modem manufacturers to abandon pushing analog-end bitrate speeds higher. Initially there were two competing standards for 56k technology, US Robotics' X2 modem and the K56Flex modem developed by Lucent Technologies and Motorola. Both competing products began to permeate the market at the beginning of 1997, and by October nearly 50% of all ISPs within the United States supported some form of 56k technology. V.90 merged the two competing designs into an entirely new standard that would receive strong industry support. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind SOCIAL MEDIA LINKS Instagram - https://www.instagram.com/newmindchannel
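The bits-per-baud arithmetic behind these standards is simple enough to show directly. The sketch below builds an illustrative square, Gray-mapped 16-QAM constellation (the real V.22bis constellation used 3 amplitudes and 12 phases, as noted above) and computes the resulting bitrate as baud rate times bits per symbol.

```python
# Illustrative QAM mapping and bitrate arithmetic; not any specific ITU standard.
import math

def gray(n: int) -> int:
    return n ^ (n >> 1)

# Build a square 16-QAM constellation: 4 bits -> one complex I/Q symbol.
levels = [-3, -1, 1, 3]
constellation = {}
for bits in range(16):
    i_bits, q_bits = bits >> 2, bits & 0b11
    constellation[bits] = complex(levels[gray(i_bits)], levels[gray(q_bits)])

def bitrate(baud: float, points: int) -> float:
    # Each symbol carries log2(M) bits, so bitrate = baud * log2(M).
    return baud * math.log2(points)

print(bitrate(600, 16))     # 2400 bps, the V.22bis data rate
print(bitrate(3429, 1024))  # ~34.3 kbps upper bound at ~10 bits per symbol
```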

The True Cost Of Bitcoin

qfGY9n85cDc | 13 Mar 2021

The True Cost Of Bitcoin

The power consumption of the Bitcoin network is a direct result of the mechanism by which it establishes trust among its participants. Bitcoin was developed to create a decentralized electronic currency system, and since the direct transfer of assets is not possible electronically, information becomes the store of value. But unlike the traditional concept of a commodity, there is no actual definition of an object that represents a Bitcoin. Rather, the network operates on a ledger that is accepted by all participants. This ledger contains the entire transaction history of the Bitcoin network, representing the changes of ownership of amounts of a definitionless entity called Bitcoin since its genesis in 2009. This shared ledger is maintained by thousands of computers worldwide, called nodes, operating on a peer-to-peer network. Each node keeps a separate copy of the entire ledger, and combined they form the public, permissionless voting system that validates every transaction. When a transaction occurs, the sender's balance from a previous Bitcoin transaction is transferred to one or more recipients identified by the public half of an asymmetric cryptographic key pair. Once a transaction is created, it's sent to the closest node, where it is subsequently distributed throughout the network. A special transaction, known as a block reward, is also added to the block as an incentive mechanism for miners to build upon the network by block creation. Each mining node can independently choose which transactions will appear in a new block, but only one can earn the authority to add its block to the chain of existing blocks that every participant on the network accepts as the Bitcoin blockchain. Finding this hash is called proof of work, or PoW. Once a valid hash is found, the new block is broadcast to the rest of the network, where it is validated and added to each node's copy of the blockchain. As of February 2021, it takes roughly 90 sextillion hash guesses to create a valid bitcoin block. This dramatic rise in the needed computational power is a direct result of an inbuilt mechanism of the bitcoin network that raises or lowers the leading-zero count requirement of a block hash in order to keep the average creation time of a new block to around 10 minutes. At its inception, each bitcoin block created rewarded 50 bitcoin. As of February 2021, one block reward is worth 6.25 bitcoin. POWER CONSUMPTION As the value of bitcoin rises, miners collectively unleash more computing power on the network to capitalize on the higher prices. This inherently forces the network difficulty to increase, and eventually an equilibrium is reached between the profitability of mining and network difficulty. And within this feedback loop that regulates the network lies a key link between bitcoin and a real-world resource: power consumption. As of February 2021, the total network hash rate has hovered around one hundred fifty quintillion block hashes calculated every second, globally. In this case, the total hash rate of the network can be said to be 150 million TH/s. Because these devices are the most efficient miners on the network, this sets the theoretical lower limit of energy consumption at 4.5 gigajoules per second, or about 40 terawatt-hours per year, if the current total hash rate is maintained. This approach assumes that all mining participants in the network aim to make a profit, and that the value of all new Bitcoins produced by mining must, on average, at least exceed the operating costs of mining.
SCALE OF POWER Even the most conservative estimate of the network's power consumption ranks it as high as the 56th most power-consuming country in the world, New Zealand. At the current best estimate of the network's power consumption levels, each bitcoin transaction takes around 700 kilowatt-hours to process. Further compounding bitcoin's power consumption issues is the fact that mining hardware must run continuously to be effective. This makes it difficult to employ excess power generation strategically for mining use, effectively making mining consumption a baseline power demand on infrastructure. In fact, it's estimated that China single-handedly operates almost 50% of the bitcoin network, with the nation of Georgia following in second with a little over 25% of all mining, and the US in 3rd place with 11.5%. The annual carbon footprint of the bitcoin network is estimated to be around 37 Mt of carbon dioxide. FUTURE Alternative consensus mechanisms, like proof of stake (PoS), have been developed to address the power consumption associated with proof of work. Many experts still warn that on its current growth trajectory, it is simply unsustainable for bitcoin to become a global reserve currency, as this would require the network to consume a significant portion of all energy produced globally. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind SOCIAL MEDIA LINKS Instagram - https://www.instagram.com/newmindchannel
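A toy version of the proof-of-work loop described above, plus the power arithmetic implied by the description's figures. The block contents, the difficulty setting, and the assumed miner efficiency in joules per terahash are illustrative, not actual network parameters (real Bitcoin hashes an 80-byte block header against a full 256-bit target).

```python
# Toy proof-of-work: vary a nonce until the double SHA-256 hash of the block
# data falls below a target, i.e. has enough leading zero bits.
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> tuple[int, str]:
    target = 1 << (256 - difficulty_bits)   # hashes below this value are valid
    nonce = 0
    while True:
        payload = block_data + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest
        nonce += 1

nonce, digest = mine(b"example block", difficulty_bits=16)
print(nonce, digest)

# Back-of-the-envelope power estimate from the description's figures
# (the joules-per-terahash efficiency is an assumed round number):
network_hashrate_th = 150e6        # 150 million TH/s
efficiency_j_per_th = 30           # assumed best-case ASIC efficiency
watts = network_hashrate_th * efficiency_j_per_th
print(watts / 1e9, "GW ->", watts * 8760 / 1e12, "TWh per year")  # ~4.5 GW, ~40 TWh
```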

The AI Hardware Problem

owe9cPEdm7k | 13 Feb 2021

The AI Hardware Problem

▶ Check out Brilliant with this link to receive a 20% discount! https://brilliant.org/NewMind/ The millennia-old idea of expressing signals and data as a series of discrete states had ignited a revolution in the semiconductor industry during the second half of the 20th century. This new information age thrived on the robust and rapidly evolving field of digital electronics. The abundance of automation and tooling made it relatively manageable to scale designs in complexity and performance as demand grew. However, the power being consumed by AI and machine learning applications cannot feasibly continue to grow as it has on existing processing architectures. THE MAC In a digital neural network implementation, the weights and input data are stored in system memory and must be fetched and stored continuously through the sea of multiply-accumulate operations within the network. This approach results in most of the power being dissipated in fetching and storing model parameters and input data to the arithmetic logic unit of the CPU, where the actual multiply-accumulate operation takes place. A typical multiply-accumulate operation within a general-purpose CPU consumes more than two orders of magnitude more energy in this data movement than in the computation itself. GPUs Their ability to process 3D graphics requires a larger number of arithmetic logic units coupled to high-speed memory interfaces. This characteristic inherently made them far more efficient and faster for machine learning by allowing hundreds of multiply-accumulate operations to be processed simultaneously. GPUs tend to utilize floating-point arithmetic, using 32 bits to represent a number by its mantissa, exponent, and sign. Because of this, GPU-targeted machine learning applications have been forced to use floating-point numbers. ASICS These dedicated AI chips offer dramatically larger amounts of data movement per joule when compared to GPUs and general-purpose CPUs. This came as a result of the discovery that, with certain types of neural networks, the dramatic reduction in computational precision only reduced network accuracy by a small amount. It will soon become infeasible to increase the number of multiply-accumulate units integrated onto a chip, or to reduce bit-precision further. LOW POWER AI Outside of the realm of the digital world, it's known definitively that extraordinarily dense neural networks can operate efficiently with small amounts of power. Much of the industry believes that the digital aspect of current systems will need to be augmented with a more analog approach in order to take machine learning efficiency further. With analog, computation does not occur in clocked stages of moving data, but rather exploits the inherent properties of a signal and how it interacts with a circuit, combining memory, logic, and computation into a single entity that can operate efficiently in a massively parallel manner. Some companies are beginning to examine returning to the long-outdated technology of analog computing to tackle the challenge. Analog computing attempts to manipulate small electrical currents via common analog circuit building blocks to do math. These signals can be mixed and compared, replicating the behavior of their digital counterparts. However, while large-scale analog computing has been explored for decades for various potential applications, it has never been successfully executed as a commercial solution.
Currently, the most promising approach to the problem is to integrate analog computing elements that can be programmed into large arrays that are similar in principle to digital memory. By configuring the cells in an array, an analog signal, synthesized by a digital-to-analog converter, is fed through the network. As this signal flows through a network of pre-programmed resistors, the currents are added to produce a resultant analog signal, which can be converted back to a digital value via an analog-to-digital converter. Using an analog system for machine learning does, however, introduce several issues. Analog systems are inherently limited in precision by the noise floor. Though, much like using lower bit-width digital systems, this becomes less of an issue for certain types of networks. If analog circuitry is used for inferencing, the result may not be deterministic and is more likely to be affected by heat, noise, or other external factors than a digital system. Another problem with analog machine learning is that of explainability. Unlike digital systems, analog systems offer no easy method to probe or debug the flow of information within them. Some in the industry propose that a solution may lie in the use of low-precision, high-speed analog processors for most situations, while funneling results that require higher confidence to lower-speed, high-precision, and easily interrogated digital systems. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind SOCIAL MEDIA LINKS Instagram - https://www.instagram.com/newmindchannel
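A conceptual sketch of the programmable analog array described above: weights become conductances, inputs become voltages, and each output line sums currents, so a matrix-vector multiply happens in a single analog step. The quantization depth and noise level below are illustrative assumptions, not measurements of any real device.

```python
# Conceptual analog crossbar matrix-vector multiply with quantization and noise.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0, 1, size=(8, 16))   # trained layer weights (8 outputs, 16 inputs)
inputs = rng.normal(0, 1, size=16)         # one input activation vector

# "Program" the crossbar: quantize weights to a small number of conductance levels.
levels = 2**4
half = levels // 2 - 1
w_max = np.abs(weights).max()
conductances = np.round(weights / w_max * half) / half * w_max

# "Apply voltages" and read the summed currents, with analog noise on the result.
ideal_currents = conductances @ inputs
noise = rng.normal(0, 0.02 * np.abs(ideal_currents).max(), size=8)
noisy_currents = ideal_currents + noise

digital_reference = weights @ inputs
print("max deviation from digital result:", np.max(np.abs(noisy_currents - digital_reference)))
```

The deviation printed at the end is exactly the precision/noise trade-off the description raises: the analog result is close to the digital one, but not bit-exact or deterministic.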

Pulling Energy Out Of Thin Air

UK8Fw5Zjna0 | 24 Jan 2021

Pulling Energy Out Of Thin Air

▶ Check out Brilliant with this link to receive a 20% discount! https://brilliant.org/NewMind/ During the Middle Ages, the concept of the perpetual motion machine would develop. The first law, known as the Law of Conservation of Energy, would prohibit the existence of a perpetual motion machine by preventing the creation or destruction of energy within an isolated system. MAXWELL'S DEMON In 1867, James Clerk Maxwell, the Scottish pioneer of electromagnetism, conceived of a thermodynamic thought experiment that exhibited a key characteristic of a thermal perpetual motion machine. Because faster molecules are hotter, the being's actions cause one chamber to warm up and the other to cool down, seemingly reversing the process of a heat engine without adding energy. ENTROPY Despite maintaining the conservation of energy, both Maxwell's demon and thermal perpetual motion machines contravened arguably one of the most unrelenting principles of thermodynamics: the second law. This inherent, natural progression of entropy towards thermal equilibrium directly contradicts the behavior of all perpetual motion machines of the second kind. BROWNIAN MOTION In 1827, Scottish botanist Robert Brown, while studying the fertilization of flowering plants, began to investigate a persistent, rapid oscillatory motion of microscopic particles that were ejected by pollen grains suspended in water. Called Brownian motion, this phenomenon was initially attributed to thermal convection currents within the fluid. However, this explanation would soon be abandoned as it was observed that nearby particles exhibited uncorrelated motion. Furthermore, the motion was seemingly random and occurred in any direction. These conclusions led Albert Einstein in 1905 to produce his own quantitative theory of Brownian motion. And within his work, Brownian motion had indirectly confirmed the existence of atoms of a definite size. Brownian motion would tie the concepts of thermodynamics to the macroscopic world. BROWNIAN RATCHET In 1900, Gabriel Lippmann, inventor of the first color photography method, proposed an idea for a mechanical thermal perpetual motion machine, known as the Brownian ratchet. The device is imagined to be small enough so that an impulse from a single molecular collision, caused by random Brownian motion, can turn the paddle. The net effect from the persistent random collisions would seemingly result in a continuous rotation of the ratchet mechanism in one direction, effectively allowing mechanical work to be extracted from Brownian motion. BROWNIAN MOTOR During the 1990s, the idea of using Brownian motion to extract mechanical work would re-emerge in the field of Brownian motor research. Brownian motors are nanomachines that can extract useful work from chemical potentials and other microscopic nonequilibrium sources. In recent years, they've become a focal point of nanoscience research, especially for directed-motion applications within nanorobotics. ELECTRICAL BROWNIAN MOTION In 1950, French physicist Léon Brillouin proposed an easily constructible electrical circuit analog to the Brownian ratchet. Much like the ratchet and pawl mechanism of the Brownian ratchet, the diode would in concept create a "one-way flow of energy", producing a direct current that could be used to perform work. However, much like the Brownian ratchet, the "one-way" mechanism once again fails when the entire device is at thermal equilibrium.
GRAPHENE - https://journals.aps.org/pre/abstract/10.1103/PhysRevE.102.042101 In early 2020, a team of physicists at the University of Arkansas would make a breakthrough in harvesting the energy of Brownian motion. Instead of attempting to extract energy from a fluid, the team exploited the properties of a micro-sized sheet of freestanding graphene. At room temperature, graphene is in constant motion. The individual atoms within the membrane exhibit Brownian motion, even in the presence of an applied bias voltage. The team created a circuit that used two diodes to capture energy from the charge flow created by the graphene's motion. In this state, the graphene begins to develop a low-frequency oscillation that shifts the evenly distributed power spectrum of Brownian motion to lower frequencies. The diodes had actually amplified the power delivered, rather than reducing it, suggesting that electrical work was done by the motion of the graphene despite being held at a single temperature. Despite contradicting decades of philosophical analysis, the team behind this experiment concluded that while the circuit is at thermal equilibrium, the thermal exchange between the circuit and its surrounding environment is in fact powering the work on the load resistor. Graphene power generation could be incorporated into semiconductor products, providing a clean, limitless power source for small devices and sensors. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind SOCIAL MEDIA LINKS Instagram - https://www.instagram.com/newmindchannel
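Einstein's quantitative treatment of Brownian motion, mentioned above, predicts that a particle's mean squared displacement grows linearly with time. The minimal random-walk simulation below (step size, particle count, and step count are arbitrary illustration values) reproduces that linear scaling.

```python
# Random-walk sketch of Brownian motion: mean squared displacement ~ time.
import numpy as np

rng = np.random.default_rng(1)
particles, steps = 5000, 1000
jumps = rng.choice([-1.0, 1.0], size=(particles, steps))  # unbiased +/-1 steps
positions = np.cumsum(jumps, axis=1)                       # trajectory of each particle
msd = np.mean(positions**2, axis=0)                        # average over particles

# For unit steps, the MSD after t steps is approximately t (slope ~1).
for t in (100, 400, 900):
    print(t, round(msd[t - 1], 1))
```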

Of Fractals And Drones

QNk0ECGFVVs | 21 Dec 2020

Of Fractals And Drones

▶️ Special Christmas deal! Every purchase of a 2-year plan will get you 4 additional months free. Go to https://nordvpn.com/newmind and use our coupon "newmind" at checkout DESCRIPTION The story of technology is one of convergence. It is ideas applied, forged from multiple disciplines, all coinciding at the right place and at the right time. This video is an account of a tiny sliver of that story, where a novel concept, born out of the explosion of discovery at the turn of the 20th century, would slowly gravitate towards a problem that lay a century away. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind SOCIAL MEDIA LINKS Instagram - https://www.instagram.com/newmindchannel FOR MORE INFORMATION ON NordVPN https://www.youtube.com/nordvpn

The Science Of Boost

gN10vCR-tPY | 02 Dec 2020

The Science Of Boost

By design, reciprocating engines are air pumps. They compress the aspirated air-fuel charge, ignite it, convert the expansion of hot gases into mechanical energy, and then expel the cooler, lower-pressure gases. The amount of energy converted is determined by the pressure exerted on the pistons by combustion and the length of the expansion cycle. By increasing how aggressively a given mass of air-fuel charge is compressed, higher combustion pressures are achieved, allowing more energy to be extracted and thus creating more mechanical power output. ROOTS SUPERCHARGER In 1859, brothers Philander Higley Roots and Francis Marion Roots founded the Roots Blower Company in Connersville, Indiana. Roots superchargers operate by pumping air with a pair of meshing lobes resembling a set of stretched gears. The incoming air is trapped in pockets surrounding the lobes and carried from the intake side to the exhaust side of the blower. TWIN-SCREW SUPERCHARGERS In 1935, Swedish engineer Alf Lysholm patented a new air pump design, as well as a method for its manufacture, that improved upon the limitations of Roots blowers. Lysholm had replaced the lobes with screws, creating the rotary-screw compressor. CENTRIFUGAL SUPERCHARGERS... INTERCOOLERS Forcing more air into a cylinder with boost easily creates more power in an engine by increasing the air mass of the intake charge beyond what is possible with natural aspiration. This also inherently pushes volumetric efficiency well beyond 100%. Because forced induction occurs outside of the engine, the properties of the air mass can be further enhanced by cooling, by passing the compressed air through a heat-exchange device known as an intercooler. TURBOCHARGERS In some extreme cases, it can take as much as ⅓ of the base engine's power to drive the supercharger to produce a net gain in power. The first turbocharger design was patented in 1905 by Swiss engineer Alfred Büchi. He had conceptualized a compound radial engine with an exhaust-driven axial-flow turbine and compressor mounted on a common shaft. Turbochargers work by converting the heat and kinetic energy contained within engine exhaust gases, as they leave a cylinder, into rotation that drives a compressor. Radial inflow turbines work on a perpendicular gas flow stream, similar to a water wheel. This shaft is housed within the center section of a turbocharger known as the center hub rotating assembly. Not only must it contain a bearing system to suspend a shaft spinning at hundreds of thousands of RPM, but it must also contend with the high temperatures created by exhaust gases. In automotive applications, the bearing systems found in most turbochargers are typically journal bearings or ball bearings. Of the two, journal bearings are more common due to their lower cost and effectiveness. A journal bearing system consists of two types of plain bearings: cylindrical bearings to contain radial loads and a flat thrust bearing to manage thrust loads. Turbine aspect ratio - This is the ratio of the area of the turbine inlet relative to the distance between the centroid of the inlet and the center of the turbine wheel. Compressor trim - This is the relationship between the compressor wheel's inducer and exducer diameters. WASTEGATES In order to prevent safe pressures and speeds from being exceeded, a mechanism called a wastegate is employed. Wastegates work by opening a valve at a predetermined compressor pressure that diverts exhaust gases away from the turbine, limiting its RPM.
In their most common form, wastegates are integrated directly into the turbine housing, employing a poppet-type valve. The valve is opened by boost pressure pushing a diaphragm against a spring of a predetermined force rating, diverting exhaust gases away from the turbine. BLOW OFF VALVES On engines with throttles, such as gasoline engines, a sudden closing of the throttle plate with the turbine spinning at high speed causes a rapid reduction in airflow beyond the surge line of the compressor. A blow-off valve is used to prevent this. MULTI-CHARGING Twincharging started to appear in commercial automotive use during the 1980s, with Volkswagen being a major adopter of the technology. In its most common configuration, a supercharger would feed directly into a larger turbocharger. TWIN-SCROLL TURBOCHARGER Twin-scroll turbochargers have two exhaust gas inlets that feed two gas nozzles. One directs exhaust gases to the outer edge of the turbine blades, helping the turbocharger to spin faster and reducing lag, while the other directs gases to the inner surfaces of the turbine blades, improving the response of the turbocharger during higher flow conditions. VARIABLE GEOMETRY Variable-geometry turbochargers are another example of turbocharger development. They generally work by allowing the effective aspect ratio of the turbocharger's turbine to be altered as conditions change. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind SOCIAL MEDIA LINKS Instagram - https://www.instagram.com/newmindchannel
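A rough ideal-gas sketch of why boost and intercooling add power: the air mass trapped per intake fill scales with absolute manifold pressure and inversely with absolute temperature. The displacement, boost level, and temperatures below are illustrative values, and volumetric-efficiency and fuel effects are ignored.

```python
# Ideal-gas estimate of intake charge mass with and without boost/intercooling.
R_AIR = 287.05  # J/(kg*K), specific gas constant for air

def charge_mass(displacement_l: float, pressure_kpa: float, temp_c: float) -> float:
    # m = p*V / (R*T)
    volume_m3 = displacement_l / 1000.0
    return (pressure_kpa * 1000.0) * volume_m3 / (R_AIR * (temp_c + 273.15))

naturally_aspirated = charge_mass(2.0, 101.325, 25)        # ambient intake
boosted_hot = charge_mass(2.0, 101.325 + 100, 120)          # ~1 bar boost, no intercooler
boosted_cooled = charge_mass(2.0, 101.325 + 100, 45)        # same boost after intercooling

for label, m in [("naturally aspirated", naturally_aspirated),
                 ("boosted, hot", boosted_hot),
                 ("boosted, intercooled", boosted_cooled)]:
    print(f"{label}: {m * 1000:.1f} g of air per full displacement fill")
```

Cooling the compressed charge packs noticeably more air mass into the same displacement, which is the whole point of the intercooler described above.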

Boring Through The Earth's Crust

wzcVT0oZJkg | 21 Oct 2020

Boring Through The Earth's Crust

Over the course of the 1960s into the 80s, several interdisciplinary geoscientific research projects, such as the Upper Mantle Project, the Geodynamics Project, and the Deep Sea Drilling Project, contributed significantly to a better understanding of the Earth's structure and development. In the 1960s, several prominent research organizations, such as the National Academy of Sciences and the Ministry of Geology of the USSR, initiated exploratory programs that used deep drilling to study the internals of the Earth. The aim of the Soviet program was to develop a model of the Earth's crust and upper mantle, as well as new methods for forecasting mineral deposits. It developed a fundamentally new technical approach to the study of the deep structure of the Earth's crust and upper mantle, based on a combination of seismic depth-sensing, deep drilling data, and other geophysical and geochemical methods. These studies resulted in technologies that advanced both super-deep drilling and geological logging in boreholes over 10 km deep. DRILLING In cable-tool drilling, each drop would transmit force through a series of heavy iron drilling columns known as strings, driving a variety of bits deep into the borehole. Rotary drills utilized a hollow drill stem, enabling broken rock debris to be circulated out of the borehole, along with mud, as the rotating drill bit cut deeper. PROJECT MOHOLE The project's goal was to drill through the Earth's crust to retrieve samples from the boundary between the crust and the mantle, known as the Mohorovicic discontinuity, or Moho. Planned as a multi-hole, three-phase project, it would ultimately achieve a drill depth of 183 meters beneath the Pacific seafloor, in water 3.6 km deep. Despite Project Mohole's failure in achieving its intended purpose, it did show that deep-ocean drilling was a viable means of obtaining geological samples. USSR'S RESPONSE The Kola Superdeep Borehole had a target depth set at 15,000 meters, and in 1979, it surpassed the 9,583-meter vertical depth record held by the Bertha Rogers hole, a failed oil-exploratory hole drilled in Washita County, Oklahoma, in 1974. By 1984, the Soviets had reached a depth of over 12,000 meters. Drilling would later restart from 7,000 meters. Finally, in 1989, after grinding through crystalline rock for more than half its journey, the drill bit reached a final reported depth of 12,262 meters, the deepest artificial point on Earth. Though this fell short of the projected 1990 goal of 13,500 meters, drilling efforts continued despite technical challenges. However, in 1992, the target of 15,000 meters was eventually deemed impossible after the temperature at the hole's bottom, previously expected to reach only 100 degrees C, was measured at over 180 degrees C. Ultimately, the dissolution of the Soviet Union led to the abandonment of the borehole in 1995. The Kola Superdeep Borehole was surpassed, in length only, by the slant-drilled Al Shaheen oil well in Qatar, which extended 12,289 meters, though with a horizontal reach of 10,902 meters. HOW IT WAS DRILLED A 215 mm diameter bit was rotated by a downhole turbine that was powered by the hydraulic pressure of ground-level mud pumps. A downhole instrument, consisting of a generator, a pulsator, and a downhole measuring unit that measures navigation and geophysical parameters, was fitted to the drill.
The pulsator converts the measured data into pressure pulses that propagate through the fluid column in the drilling tool and are received by pressure sensors at the surface. At the surface, the signal received by the pressure sensors is sent to the receiving device, where it is amplified, filtered, and decoded for control and recording use. The downhole instrument is powered by the generator, which uses the movement of flushing fluid as a power source. WHAT WAS FOUND Rock samples taken from the borehole exposed cycles of crust-building that brought igneous rock into the crust from the mantle below. Additionally, one of the primary objectives of the Kola well was to penetrate through the upper layer of granite into the underlying basaltic rock. Even more astonishing was the discovery of a subterranean layer of marine deposits, almost 7,000 meters beneath the surface, that were dated at two billion years old and contained the fossil traces of life from 24 different species of plankton. Similar projects have taken place since the drilling of the Kola Superdeep Borehole. One such notable example was the German Continental Deep Drilling Program, which was carried out between 1987 and 1995, reaching a depth of over 9,000 meters and using one of the largest derricks in the world. From this, the San Andreas Fault Observatory at Depth, or SAFOD, drilling project was formed in 2002. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind SOCIAL MEDIA LINKS Instagram - https://www.instagram.com/newmindchannel

The Evolution Of CPU Processing Power Part 4: The 32 Bit Processor - Pipelines and Caches

qRbDWkOE63I | 01 Oct 2020

The Evolution Of CPU Processing Power Part 4: The 32 Bit Processor - Pipelines and Caches

SERIES LINK - https://www.youtube.com/playlist?list=PLC7a8fNahjQ8IkiD5f7blIYrro9oeIfJU The rapid expansion of software from simple text-based tools to massively complex, feature-rich, highly visual products would dominate the mass-market computing world during the 1980s and 90s. And with this push came a higher demand on processors to both efficiently utilize more memory and grow in computing power, all while keeping costs at consumer-accessible levels. RISE OF 32-BIT During the mid-1980s, in response to the growing demands of software, the opening moves towards the mainstream adoption of 32-bit processor architecture would begin. While 32-bit architectures have existed in various forms as far back as 1948, particularly in mainframe use, at the desktop level only a few processors had full 32-bit capabilities. Produced at speeds ranging from 12 MHz to 33 MHz, the 68020 had 32-bit internal and external data buses as well as a 32-bit address bus. Its arithmetic logic unit was also now natively 32-bit, allowing for single-clock-cycle 32-bit operations. One year later, Intel would introduce its own true 32-bit processor family, the 80386. Not only did it offer a new set of 32-bit registers and a 32-bit internal architecture, but also built-in debugging capabilities as well as a far more powerful memory management unit that addressed many of the criticisms of the 80286. This allowed most of the instruction set to either target the newer 32-bit architecture or perform older 16-bit operations. With 32-bit architecture, the potential to directly address and manage roughly 4.2 GB of memory proved to be promising. This new scale of memory addressing capacity would develop into the predominant architecture of software for the next 15 years. On top of this, protected mode can also be used in conjunction with a paging unit, combining segmentation and paging memory management. The ability of the 386 to disable segmentation by using one large segment effectively allowed it to have a flat memory model in protected mode. This flat memory model, combined with the power of virtual addressing and paging, is arguably the most important feature change for the x86 processor family. PIPELINING CPUs designed around pipelining can also generally run at higher clock speeds due to the fewer delays from the simpler logic of a pipeline's stages. The instruction data is usually passed in pipeline registers from one stage to the next, via control logic for each stage. Data inconsistency that disrupts the flow of a pipeline is referred to as a data hazard. Control hazards occur when a conditional branch instruction is still executing within the pipeline while new instructions from the incorrect branch path are being loaded into the pipeline. One common technique to handle data hazards is known as pipeline bubbling. Operand forwarding is another employed technique, in which data is passed through the pipeline directly before it's even stored within the general CPU logic. In some processor pipelines, out-of-order execution is used to help reduce underutilization of the pipeline during data hazard events. Control hazards are generally managed by attempting to choose the most likely path a conditional branch will take in order to avoid the need to reset the pipeline. CACHING In caching, a small amount of high-speed static memory is used to buffer access to a larger amount of lower-speed but less expensive dynamic memory.
A derived identifier called a tag, which indicates which of the possible mapped memory regions the block currently holds, is also stored within the cache block. While simple to implement, direct mapping creates an issue when two needed memory regions compete for the same mapped cache block. When an instruction invokes a memory access, the cache controller calculates the block set the address will reside in and the tag to look for within that set. If the block is found and it is marked as valid, then the requested data is read from the cache. This is known as a cache hit, and it is the ideal path of memory access due to its speed. If the address cannot be found within the cache, then it must be fetched from slower system memory. This is known as a cache miss, and it comes with a large performance penalty, as it can potentially stall an instruction cycle while a cache update is performed. Writing data to a memory location introduces its own complication, as the cache must now synchronize any changes made to it with system memory. The simplest policy is known as a write-through cache, where data written to the cache is immediately written to system memory. Another approach, known as a write-back or copy-back cache, tracks written blocks and only updates system memory when the block is evicted from the cache by replacement. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind SOCIAL MEDIA LINKS Instagram - https://www.instagram.com/newmindchannel
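To make the tag and index bookkeeping described above concrete, here is a minimal sketch of a direct-mapped cache lookup in Python. The block size, cache size, and addresses are illustrative assumptions, not values from the video, and real controllers implement this logic in hardware.

```python
# Minimal direct-mapped cache model (illustrative parameters, not a real controller).
BLOCK_SIZE = 64          # bytes per cache block (assumed)
NUM_BLOCKS = 256         # number of blocks in the cache (assumed)

# Each cache entry stores: valid flag, tag, and (here) a copy of the block data.
cache = [{"valid": False, "tag": None, "data": None} for _ in range(NUM_BLOCKS)]

def split_address(addr: int):
    """Split a byte address into block offset, index (which block), and tag."""
    offset = addr % BLOCK_SIZE
    block_number = addr // BLOCK_SIZE
    index = block_number % NUM_BLOCKS   # which cache block the address maps to
    tag = block_number // NUM_BLOCKS    # identifies which mapped region is resident
    return tag, index, offset

def read(addr: int, memory: dict):
    tag, index, offset = split_address(addr)
    line = cache[index]
    if line["valid"] and line["tag"] == tag:
        return "hit", line["data"][offset]        # cache hit: the fast path
    # Cache miss: fetch the whole block from (slow) system memory, evicting what was there.
    base = (addr // BLOCK_SIZE) * BLOCK_SIZE
    block = bytes(memory.get(base + i, 0) for i in range(BLOCK_SIZE))
    cache[index] = {"valid": True, "tag": tag, "data": block}
    return "miss", block[offset]

memory = {0x1234: 42}                  # toy "system memory"
print(read(0x1234, memory))            # ('miss', 42) on first access
print(read(0x1234, memory))            # ('hit', 42) once the block is resident
```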

Extreme Materials

3a8uOXIPhNI | 09 Aug 2020

Extreme Materials

CHECK OUT [Bony Right] The Impact Of Cell Towers - https://youtu.be/BcILeG62NjU DESCRIPTION - Superalloys - They also possess excellent mechanical strength and resistance to thermal creep, a permanent deformation under constant load at high temperatures. Additionally, they offer good surface stability and excellent resistance to oxidation. Superalloys achieve their high-temperature strength through an alloying process known as solid solution strengthening, where the solute atom is large enough that it can replace solvent atoms in their lattice positions while leaving the overall crystal structure relatively unchanged. The casting process is especially important in the production of heat-resistant superalloys such as those used in aircraft engine components. - Aggregated Diamond Nanorods - Some materials resist this deformation and break very sharply, without plastic deformation, in what is called a brittle failure. The measure of a material's resistance to deformation, particularly in a localized manner, is its hardness. Diamonds have always been the standard for hardness, being the hardest material known to man. X-ray diffraction analysis has indicated that ADNRs are 0.3% denser than standard diamonds, giving rise to their superior hardness. Testing performed on a traditional diamond with an ADNR tip produced a hardness value of 170 GPa. Still, it's speculated that ADNR's hardness on the Mohs scale could exceed 10, the rating of a diamond. - Delo Monopox VE403728 - The way we utilize the properties of materials tends to occur in plain sight. Adhesives by definition are any non-metallic substance applied to one or both surfaces of two separate materials that binds them together and resists their separation. Sometimes referred to as glues or cements, they are one of the earliest engineering materials used by man. The lap shear strength is reported as the failure stress in the adhesive, which is determined by dividing the failing load by the bond area. For comparison, a single 6 mm spot weld found on the chassis of most cars typically has a lap shear strength of 20 MPa. This substance is estimated to have a shear strength of around 60 MPa, approaching the strength of a soldered copper joint. - B. A. M. - How easily two materials slide against each other is determined by their coefficient of friction, a dimensionless value that describes the ratio of the force of friction between two objects to the force pressing them together. Most dry materials, against themselves, have friction coefficient values between 0.3 and 0.6. Aside from its hardness, its unique composition exhibits the lowest known coefficient of friction of any dry material, 0.04, and it was able to get as low as 0.02 using water-glycol-based lubricants. BAM is so slippery that a hypothetical 1 kg block coated in the material would start sliding down an inclined plane of only 2 degrees. - Upsalite - Similar to how the slipperiest material was discovered, the most absorbent material would also be accidentally discovered in 2013, by a group of nanotechnology researchers at Uppsala University. While pursuing more viable methods for drug delivery using porous calcium carbonate, the team accidentally created an entirely new material thought for more than 100 years to be impossible to make. This material, mesoporous magnesium carbonate or Upsalite, is a non-toxic magnesium carbonate with an extremely porous surface area, allowing it to absorb more moisture at low humidities than any other known material.
Each nanopore is less than 10 nanometers in diameter, which results in one gram of the material having 26 trillion nanopores, making it very reactive with its environment. This characteristic gives it incredible moisture absorption properties, allowing it to absorb more than 20 times more moisture than fumed silica, a material commonly used for moisture control during the transport of moisture-sensitive goods. - Chlorine Trifluoride - Chlorine trifluoride is a colorless, poisonous, corrosive, and extremely reactive gas. In fact, it is so reactive that it is the most flammable substance known. First prepared in 1930 by the German chemist Otto Ruff, it was created by the fluorination of chlorine and then separated by distillation. Because chlorine trifluoride is such a strong oxidizing and fluorinating agent, it will react with most inorganic and organic materials, and will even initiate combustion in many non-flammable materials without an ignition source. Its oxidizing ability even surpasses that of oxygen, allowing it to react even with oxide-containing materials considered incombustible. It has been reported to ignite glass, sand, asbestos, and other highly fire-retardant materials. It will also ignite the ashes of materials that have already been burned in oxygen. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind SOCIAL MEDIA LINKS Instagram - https://www.instagram.com/newmindchannel
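As a quick check on the BAM incline claim above, the angle at which a block just begins to slide satisfies tan(theta) = mu, the coefficient of static friction. A minimal sketch in Python, assuming the quoted friction values behave as static coefficients:

```python
import math

# Angle at which a block on an incline just begins to slide: tan(theta) = mu.
# Assumes the quoted BAM friction values act as static coefficients.
for mu in (0.04, 0.02):
    theta = math.degrees(math.atan(mu))
    print(f"mu = {mu:.2f} -> sliding begins at about {theta:.1f} degrees")

# mu = 0.04 -> about 2.3 degrees (dry), consistent with the ~2 degree claim
# mu = 0.02 -> about 1.1 degrees (with a water-glycol lubricant)
```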

4K 60 FPS Footage Of The First Flying Machines 1890-1910

a_G1YbItY9o | 25 Jul 2020

4K 60 FPS Footage Of The First Flying Machines 1890-1910

This is an AI-colorized and upscaled compilation of footage from the early days of aviation, when dangerous, bizarre contraptions attempted to take to the sky long before a working understanding of aerodynamics existed. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind SOCIAL MEDIA LINKS Instagram - https://www.instagram.com/newmindchannel

Head Transplants

qGrK8hg2j48 | 15 Jul 2020

Head Transplants

CHECK OUT [Bony Right] What Is A Virus - https://youtu.be/bNY71vmeSs4 DESCRIPTION In 1894, the assassination of the French President Marie Francois Sadi Carnot by an Italian anarchist triggered a chain of events that would lead to some of the most remarkable breakthroughs in surgical medicine of the 20th century. Carnot ultimately succumbed to his knife wound due to the severing of his portal vein. At the time, surgeons had no technique that could successfully reconnect blood vessels. This left a lasting impression on a young French surgeon named Alexis Carrel, who would ultimately go on to develop new techniques for suturing blood vessels. Interest in head transplantation started early on in modern surgery, though it would take Alexis Carrel's breakthrough in the joining of blood vessels, or vascular anastomosis, to make the procedure feasible. In 1908, Carrel and the American physiologist Dr. Charles Guthrie performed the first attempts at head transplantation with two dogs. They attached one dog's head onto another dog's neck, connecting arteries in such a way that blood flowed first to the decapitated head and then to the recipient's head. The decapitated head was without blood flow for about 20 minutes during the procedure, and while the transplanted head demonstrated aural, visual, and cutaneous reflex movements early after the procedure, its condition soon deteriorated and it was euthanized after a few hours. Throughout the 1950s and 60s, advances in immunosuppressive drugs and organ transplantation techniques offered new tools and methods to overcome some of the challenges faced by previous head transplantation attempts. In 1965, Robert White, an American neurosurgeon, began his own controversial research. However, unlike Guthrie and Demikhov, who focused on head transplantation, White's goal was to perform a transplant of an isolated brain. In order to accomplish this challenging feat, he developed new perfusion techniques which maintained blood flow to an isolated brain. White created vascular loops to preserve the blood vessels and blood flow between the internal jaw area and the internal carotid arteries of the donor dog. This arrangement was referred to as "auto-perfusion" in that it allowed the brain to be perfused by its own carotid system even after being severed at the second cervical vertebral body. Deep hypothermia was then induced on the isolated brain to reduce its function, and it was then positioned between the jugular vein and carotid artery of the recipient dog and grafted to the cervical vasculature. It would not be until 45 years later that the next major breakthrough in head transplantation would occur. In 2015, using mice, the Chinese surgeon Xiaoping Ren would improve upon the methods used by Robert White by utilizing a technique in which only one carotid artery and the opposite jugular vein were cut, allowing the remaining intact carotid artery and jugular vein to continuously perfuse the donor head throughout the procedure. To date, all attempts at head transplantation have been primarily limited to connecting blood vessels.
However, the recent development of "fusogens" and their use in the field of spinal anastomosis, or the joining of spinal nerves, has opened up a potential solution for fusing the nervous systems of the donor and the recipient during transplantation. Around the same time as Ren's research, the Italian neurosurgeon Sergio Canavero also put forth his own head transplantation protocol that not only addressed reconnecting a spinal cord but was specifically designed for human head transplantation. Canavero's protocol is based on an acute, tightly controlled spinal cord transection, unlike what occurs during traumatic spinal cord injury or a simple surgical severing. He postulates that a controlled transection will allow tissue integrity to be maintained and subsequent recovery and fusion to occur. His proposed technique claims to exploit a secondary pathway in the brain known as the "corticotruncoreticulo-propriospinal" pathway. This gray-matter system of intrinsic fibers forms a network of connections between spinal cord segments. When the primary corticospinal tract is injured, the severed corticospinal tract axons can form new connections via these propriospinal neurons. One of the issues most overlooked by head transplant researchers is that of pain. In 2015, Valery Spiridonov, a 33-year-old Russian computer scientist who suffers from a muscle-wasting disease, became the first volunteer for HEAVEN, or the "head anastomosis project," led by Canavero. However, soon after the announcement, he withdrew from the experiment. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind SOCIAL MEDIA LINKS Instagram - https://www.instagram.com/newmindchannel

Laser Weapons

NPaV4KxQbtQ | 11 Jun 2020

Laser Weapons

The concept of using light as a weapon has intrigued weapon designers for centuries. The first such system hypothesized was the Archimedes heat ray. In 1960, Theodore Maiman operated the first functioning laser at Hughes Research Laboratories in Malibu, California. HISTORY In civilian applications, lasers would soon grow in power. With the ability to focus kilowatts of energy onto a small point, their use in industrial welding and cutting expanded rapidly. Their initial military use, however, was more indirect, being primarily for range finding, targeting, and ordnance guidance. The first use of lasers to damage targets directly was in laser blinding weapons. Because relatively low energy levels could permanently blind combatants, their use led to the Protocol on Blinding Laser Weapons in 1995. Lower-powered systems intended to temporarily blind or disorient their targets, called dazzlers, are still in use today by both the military and law enforcement. Laser systems that directly use highly focused light as a ranged weapon to damage a target are part of a class of arms known as Directed Energy Weapons, or DEWs. TACTICAL LASERS One of Boeing's technology demonstrators consists of a modified "Avenger" air defense vehicle with a laser DEW in place of its missile launcher. As a laser source, this system uses a commercial 2 kW solid-state laser and has demonstrated its effectiveness against unmanned aerial vehicles as well as explosive devices on the ground. Another, more powerful, tactical development by Boeing is the Relocatable High Energy Laser System, or RHELS. Raytheon, in a separate effort, has replaced a conventional cannon with an industrial fiber laser, successfully testing the concept against a variety of targets, including incoming mortar rounds. This heat has to be transported out of the solid-state medium in order to avoid overheating and destroying the laser. Additionally, the non-uniform temperature distribution within the amplifier causes a higher than ideal beam divergence of the resulting laser beam, reducing the delivered energy per target area. Fiber lasers, in particular, are ideal for weapon use due to the ends of the fiber itself being used to form the laser resonator. One notable example has been Northrop Grumman's Joint High Power Solid-State Laser program, which has produced beams in the range of 100 kW. STRATEGIC LASERS Power levels at this magnitude are predominantly achieved by chemical lasers, a focal technology of all strategic military laser programs. Chemical lasers work by using a chemical reaction to create the beam. The involved reactants are fed continuously into the reaction chamber, forming a gas stream which functions as the light-amplifying medium for the laser. Because the gas stream is continuously being produced while spent reactants are vented out of the laser, excess heat does not accumulate and the output power is not limited by the need for cooling. The Advanced Tactical Laser, or ATL, and the Airborne Laser, or ABL, have been the two most notable chemical laser DEW programs in recent years. What makes both of these programs so unique is that they are the first aircraft-based laser DEWs. The ATL is a technology demonstrator built to evaluate the capabilities of a laser DEW for "ultra-precise" attacks against communication platforms and vehicles. Powered by a Chemical Oxygen Iodine Laser, or COIL, its beam is speculated to be capable of up to 300 kW. Of all the laser DEW programs explored, the ABL system is arguably the most prominent and recognizable.
Built around a Boeing 747 designated as the YAL-1, the ABL is also powered by a Chemical Oxygen Iodine Laser, though one large enough to produce a continuous output power well into the megawatt range. In addition to the incredible power of its main laser, the ABL also features an adaptive optics system, which is capable of correcting the degrading influence of atmospheric turbulence on the laser beam. On March 15, 2007, the YAL-1 successfully fired its laser in flight, hitting its target, a modified NC-135E Big Crow test aircraft. On February 11, 2010, now fitted with a more powerful laser, the system successfully destroyed a liquid-fueled ballistic missile in its boost phase during a test off the central California coast. Laser defense systems such as the US Navy's XN-1 LaWS, deployed on the USS Ponce, and the Israeli Iron Beam air defense system are being used experimentally against low-end asymmetric threats. Though these systems are modest compared to the promises of the multi-billion-dollar programs of years past, at a cost of less than one dollar per shot, the versatility of these smaller, less expensive laser DEWs may prove to be the future of the technology. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind SOCIAL MEDIA LINKS Instagram - https://www.instagram.com/newmindchannel

The Story Of Electric Vehicle Batteries

3pz5LxiEndA | 26 May 2020

The Story Of Electric Vehicle Batteries

The Tesla 2170 lithium-ion battery cell and other high-capacity lithium-ion battery cell technologies all represent the first hopeful steps in transitioning society towards a new standard in practical and economical transportation via electric vehicles. HOW BATTERIES WORK The modern incarnation of the electrochemical battery is credited to the Italian scientist Alessandro Volta, who put together the first battery in response to the misguided findings of his colleague, Luigi Galvani. Volta suspected that the electric current came from the two dissimilar metals and was merely being transmitted through the frogs' tissues, not originating from them. Volta had developed the first electrochemical battery, known as a voltaic pile. Individual cells can be combined into configurations that increase the total voltage, the current capacity, or both. This is known as a battery. In primary batteries, the electrodes become depleted as they release their positive or negative ions into the electrolyte, or the build-up of reaction products on the electrodes prevents the reaction from continuing. This results in a one-time-use battery. In secondary batteries, the chemical reaction that occurred during discharge can be reversed. FIRST RECHARGEABLE BATTERY In 1859, the French physicist Gaston Planté would invent the lead-acid battery, the first-ever battery that could be recharged. By the 1880s, the lead-acid battery would take on a more practical form, with each cell consisting of interlaced plates of lead and lead dioxide. In the early 1900s, the electric vehicle began to grow in popularity in the United States, after thriving in Europe for over 15 years. Within a few years, most electric vehicle manufacturers had ceased production. NiMH In the late 1960s, research had begun by the global communications company COMSAT on a relatively new battery chemistry called nickel-hydrogen. Designed specifically for use on satellites, probes, and other space vehicles, these batteries used hydrogen stored at up to 82 bar with a nickel oxide hydroxide cathode and a platinum-based catalyst anode that behaved similarly to a hydrogen fuel cell. The pressure of hydrogen would decrease as the cell was depleted, offering a reliable indicator of the battery's charge. Though nickel-hydrogen batteries offered only a slightly better energy storage capacity than lead-acid batteries, their service life exceeded 15 years and they had a cycle durability exceeding 20,000 charge/discharge cycles. By the early 1980s, their use on space vehicles became common. Over the next two decades, research into nickel-metal hydride cell technology was supported heavily by both Daimler-Benz and Volkswagen AG, resulting in the first generation of batteries achieving storage capacities similar to nickel-hydrogen, though with a fivefold increase in specific power. This breakthrough led to the first consumer-grade nickel-metal hydride batteries becoming commercially available in 1989. REVIVAL OF ELECTRIC CARS Almost 100 years after the first golden age of electric vehicles, a confluence of several factors reignited interest in electric vehicles once again. This initiative intersected with the recent refinement of nickel-metal hydride battery technology, making practical electric vehicles a viable commercial option to pursue. By the late 1990s, mass-market electric vehicle production had started once again. Taking a more risk-averse approach, many automakers started to develop all-electric models based on existing platforms in their model lineup.
MODERN ELECTRIC CARS Despite lithium-ion batteries becoming a viable option for electric vehicles, the second half of the 1990s into the mid-2000s was primarily dominated by the more risk-averse technology of hybrid-powered vehicles. And even successful early models such as the Toyota Prius were generally still powered by nickel-metal hydride battery technology. At the time, lithium-ion batteries were still relatively unproven for vehicle use and also cost more per kWh. Around 2010, the cathode material of lithium-ion cells would evolve once again with the advent of lithium nickel manganese cobalt oxide cathodes, or NMC. Curiously, Tesla is known for being the only manufacturer that does not use NMC cell technology, relying instead on the much older lithium nickel cobalt aluminum oxide cathode, or NCA. COBALT With the surge in consumer adoption of electric vehicles comes a rise in the demand for the lithium-ion batteries that power them. While roughly half of the cobalt produced is currently used for batteries, the metal also has important uses in electronics, tooling, and superalloys like those used in jet turbines. More than half of the world's cobalt comes from the Democratic Republic of the Congo. With no state regulation, cobalt mining in the region is also plagued with exploitative practices. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind SOCIAL MEDIA LINKS Instagram - https://www.instagram.com/newmindchannel
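As a back-of-the-envelope illustration of how individual cells combine into a pack, as described in the HOW BATTERIES WORK section above: cells in series add their voltages, while parallel strings add their charge capacity. The cell figures below are rough, assumed values for a 2170-class lithium-ion cell, not specifications from the video.

```python
# Illustrative series/parallel pack arithmetic (assumed cell values, not official specs).
cell_voltage_nom = 3.6    # volts, typical nominal Li-ion cell voltage (assumption)
cell_capacity_ah = 5.0    # amp-hours, rough 2170-class cell capacity (assumption)

cells_in_series = 96      # series count sets the pack voltage
parallel_strings = 46     # parallel count sets the pack capacity

pack_voltage = cells_in_series * cell_voltage_nom            # ~346 V
pack_capacity_ah = parallel_strings * cell_capacity_ah       # ~230 Ah
pack_energy_kwh = pack_voltage * pack_capacity_ah / 1000     # ~80 kWh

print(f"{pack_voltage:.0f} V, {pack_capacity_ah:.0f} Ah, ~{pack_energy_kwh:.0f} kWh")
```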

How N95 Masks Stop Viruses

nQmZou7TaVc | 12 May 2020

How N95 Masks Stop Viruses

Preventing a pathogen from entering our respiratory system may, at first glance, seem like a simple problem. The first thought might be to trap pathogens by preventing particles from moving through a filter. But looking deeper at the problem reveals the true scope of the challenge. With every normal breath we take, we inhale around a half-liter of air. The pressure difference between the atmosphere and our lungs during inhalation peaks at around 8 cm of water. For comparison, a typical shop vac can pull a vacuum of around 200 cm of water, or about 25 times that of our lungs. Pathogens vary widely in size, with bacteria generally ranging from 1-20 um, while viruses can range from 17 nm up to 750 nm. The rhinovirus that causes the common cold, for example, is around 30 nm in diameter, while HIV, SARS-CoV-2, and some strains of influenza hover around 120 nm. TYPES OF RESPIRATORS N95 respirators are part of a class of respiratory protection devices known as mechanical filter respirators. These mechanically stop particles from reaching the wearer's nose and mouth. Another form of respiratory protection is the chemical cartridge respirator. These are specifically designed to chemically remove harmful volatile organic compounds and other vapors from the breathing air. Both classes of respirators are available in powered configurations, known as powered air-purifying respirators. N95 The N95 designation is a mechanical filter respirator standard set and certified by the National Institute for Occupational Safety and Health in the United States. The number designates the percentage of airborne particles removed, not their size. While ratings up to N100, which can filter 99.97% of airborne particles, exist, N95 respirators were determined in the 1990s to be suitable for short-term health care use. Other designations include oil-resistant R and oil-proof P respirators, which are designed to be more durable and maintain filter effectiveness against oily particles in industrial use. Surgical-grade N95 respirators possessing fluid resistance were specifically cleared by the United States Food and Drug Administration for medical use. HOW THEY WORK Modern mechanical filter respirators work not by 'netting' particles but rather by forcing them to navigate through a high-surface-area maze of multiple layers of filter media. This concept allows for large unobstructed paths for air to flow through while causing particles to attach to fibers due to a number of different mechanisms. In order to achieve the high surface area required, a non-woven fabric manufacturing process known as melt blowing is used for the filter media. In this technique, high-temperature, high-pressure air is used to melt a polymer, typically polypropylene, while it is being spun. This produces a tough yet flexible layer of material composed of small fibers. Depending on the specifications of the layer being produced, these fibers can range from 100 um all the way down to about 0.8 um in diameter. How these fibers capture particles is determined by the movement of air through the filter media. The path of air traveling around a fiber moves in streams. The likelihood that a particle stays within one of these streams is primarily determined by its size. The largest particles in the air tend to be slow-moving and predominantly settle out due to gravity. Particles that are too small for the effects of gravity, down to around 600 nm, are primarily captured by inertial impaction and interception. Inertial impaction occurs with the larger particles in this size range.
In contrast, particles below 100 nm are mainly captured through a mechanism known as diffusion. Random movements of air molecules cause these very small particles to wander across the air stream due to Brownian motion. Because the path they take through the filter is drawn out, the probability of capture through inertial impaction or interception increases dramatically, particularly at lower airflow velocities. EFFICIENCY Because of the complex, overlapping methods by which particle filtration occurs, the smallest particles are not the most difficult to filter. In fact, the point of lowest filter efficiency tends to occur where the complementary methods begin to transition into each other, around 50-500 nm. Particles in this range are too large to be effectively pushed around by diffusion and too small to be effectively captured by interception or inertial impaction. This also happens to be the size range of some of the more harmful viral pathogens. Interestingly, the more a respirator is worn, the more efficient it becomes. FLAWS The weakest point of any respirator is how well it seals against the face. Air will always pass through facial leaks, because they offer much lower resistance than the respirator, carrying particles with it. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind SOCIAL MEDIA LINKS Instagram - https://www.instagram.com/newmindchannel
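The "most penetrating particle size" described above can be illustrated with a toy model: treat each capture mechanism as an independent probability and combine them, with diffusion falling off as particles grow and interception/impaction rising. The scalings and constants below are illustrative assumptions only, not a validated filtration model or measured respirator data.

```python
# Toy illustration of why filter efficiency dips at intermediate particle sizes.
# Diffusion capture falls with particle size; interception/impaction rise with it.
# Exponents and prefactors are illustrative assumptions, not measured filter data.

def capture_efficiency(d_nm: float) -> float:
    diffusion = min(1.0, 8.0 * d_nm ** -0.67)      # strong for very small particles
    interception = min(1.0, (d_nm / 2000.0) ** 2)  # grows with particle size
    impaction = min(1.0, (d_nm / 3000.0) ** 2)     # grows with particle size
    # Combine as independent chances of capture: a particle escapes only if
    # it evades every mechanism.
    return 1.0 - (1 - diffusion) * (1 - interception) * (1 - impaction)

for d in (10, 50, 100, 300, 1000, 3000):
    print(f"{d:>5} nm -> {capture_efficiency(d):.3f}")
# Efficiency is high at both extremes and lowest somewhere in between,
# which is why respirator standards test filters near this worst-case size.
```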

The Mystery Of Paperclips

7NRxW9ReNsA | 27 Mar 2020

The Mystery Of Paperclips

As an innovation, the modern paperclip has a peculiar history. HISTORY OF PAPER Paper was first created in China during the first century A.D. Initially made from cotton and linen, these fabric papers were expensive to produce and were generally reserved for permanent writing. Because of paper's value, more trivial, temporary writing was done on reusable clay or wax tablets. By the 19th century, the industrial revolution brought about the invention of wood pulping and industrial paper mills, making paper production inexpensive and widely available. FIRST PAPER CLIPS By dividing the processes of drawing, straightening, forming, and cutting iron into over a dozen individual tasks, each done by a dedicated laborer, pin production became over 1,000 times more efficient. Where a single man could barely create 30 pins in a day, this early use of the assembly line would easily yield production rates of over 30,000 pins. WIRE TO CLIPS Advancements in both metallurgy and mechanization would finally bring about the marvel of modern paper-holding technology, the paperclip. The key to this shift from pins to clips occurred during the 1850s with the introduction of low-cost, industrially produced steel. During the last few decades of the 19th century, thousands of patents were issued for almost every shape of formed steel wire that could conceivably be used as a commercial product. THE FIRST PAPER CLIPS Among these early steel-wire-based products were the first paper clips. The earliest known patent for a paper clip was awarded in the United States to Samuel B. Fay. Some of these designs, such as the bow-shaped Ideal paper clip and the two-eyed Owl clip, can still be found in use today. Many were created to address specific challenges of managing paperwork. GEM PAPER CLIPS Among them, the "Gem Manufacturing Company" had arisen as the namesake behind this design, with a reference appearing in an 1883 article touting the benefits of the "Gem Paper-Fastener". However, no illustrations exist of these early "Gem paper-clips", making it unclear if the company truly did invent the modern Gem paperclip. Interestingly, aside from Cushman and Denison's branding claim, even 30 years after its first appearance, the Gem-style paper clip still remained unpatented. Even stranger, in 1899 a patent was granted to William Middlebrook of Waterbury, Connecticut for a "Machine for making wire paper clips." Within the pages of his patent filing was a drawing clearly showing that the product produced was a Gem-style paperclip. OTHER CLAIMS There have been several other unsubstantiated claims to the invention of the modern paperclip. One belongs to the Norwegian inventor Johan Vaaler, whose paper-binding invention was illustrated within his book, though it looked more like a modern cotter pin than a contemporary Gem-style paper clip. In 1901, Vaaler was liberally granted patents both in Germany and in the United States for a paper clip of similar design, though it lacked the final bend found in Gem paper clips. Vaaler would posthumously become a national myth, based on the false assumption that the paper clip was invented by an unrecognized Norwegian prodigy. The Gem-style paper clip would remain mostly unchanged over the next 120 years. It would even become a symbol of national unity in Norway during the Second World War. The paperclip would even be commemorated on a Norwegian stamp in 1999. Many manufacturers have attempted to improve on the design by adding ripples for a better grip. Still, the simple steel-wire Gem-style paper clip remains a staple of basic office supply needs even today.
Its ease of use and its effectiveness at gripping and storing papers without tangling or damaging them have made it one of the few inventions in human history that has proven difficult to improve upon. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind SOCIAL MEDIA LINKS Instagram - https://www.instagram.com/newmindchannel

The Evolution Of Cutting Tools

YSdho8y4EoA | 31 Jan 2020

The Evolution Of Cutting Tools

The story begins with how cutting tools evolved from simple paleolithic stone edges to the knives, axes and other basic metal cutting tools of the copper, bronze, and iron ages. From there we look at the discoveries of metallurgy during the industrial era, the rise of steel, and the evolution of machine tools. We explore the advancements in the tooling of mills, lathes and shapers as cutting tool materials moved from high-speed steel to carbides and other exotic cutting materials. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind SOCIAL MEDIA LINKS Instagram - https://www.instagram.com/newmindchannel/ FOOTAGE USED J.Kacher, G.S.Liu, I.M.Robertson "In situ and tomographic observations of defect-free channel formation in ion irradiated stainless steels" https://www.sciencedirect.com/science/article/abs/pii/S0968432812000182 Mike Williams Basic Carbide - How it's Made https://www.youtube.com/watch?v=95yS7W66-BI

The Story Of Large Vessel Engines

v1iWVwgTuy8 | 03 Jan 2020

The Story Of Large Vessel Engines

A look at the evolution of the engines that power large cargo vessels over the last 100 years, starting with coal-fired reciprocating steam engines such as the triple-expansion engine, moving to steam turbines, and finally to modern marine diesel engines. The different configurations of marine diesels are also explored, along with how their characteristics lend themselves to powering the largest ships in the world. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind FOOTAGE USED harryolynx - Triple-Expansion Marine Steam Engine (1910) In Slow Motion https://www.youtube.com/watch?v=N7pugOzJEyY wartsilacorp - The new LNGPac™ LNG fuel handling system | Wärtsilä https://www.youtube.com/watch?v=bqA0aJNGL10 “Stock footage provided by Videvo, downloaded from https://www.videvo.net”

The Science Of Small Distances

Aw-xbs8ZWxE | 19 Dec 2019

The Science Of Small Distances

We explore the precise measurement and machining of small distances and their importance to modern industrial society. The history of the meter and of distance measurement is explained, along with intuitive examples of small distances, moving from millimeter scales to the realm of microns. Further, we discuss some of the engineering issues that emerge as we try to machine to smaller tolerances, such as fitment, assembly, and thermal expansion. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind

The Mechanical Battery

_QLEERYS5C8 | 14 Nov 2019

The Mechanical Battery

Though more commonly known in its electrochemical variant, a battery or accumulator is any device that stores energy. Batteries fundamentally allow us to decouple energy supply from demand. But a far lesser-known, mechanical rechargeable battery based on flywheel energy storage, or FESS, is seeing a resurgence of interest. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind FOOTAGE USED “Stock footage provided by Videvo, downloaded from https://www.videvo.net” Loom Footage James Hargreaves - Spin Doctor https://www.youtube.com/watch?v=an4hi0knlaA Steam Engine 1897 Robb Armstrong Steam Engine https://www.youtube.com/watch?v=Jpz5WmWpWZM Magnax Axial Flux Permanent Magnet Electric Motor / Generator https://www.youtube.com/watch?v=rGu7XDapR58 Cisco DC2011-Texas: Flywheels for Backup Power https://www.youtube.com/watch?v=kQirOFEygJQ Power Line Footage Website: http://www.beachfrontbroll.com
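The energy stored in a flywheel follows E = ½Iω², with the moment of inertia of a solid cylinder being I = ½mr². A minimal sketch in Python, using illustrative rotor figures that are assumptions rather than numbers from the video:

```python
import math

# Energy stored in a spinning flywheel: E = 0.5 * I * omega^2.
# Rotor mass, radius, and speed below are illustrative assumptions.
mass_kg = 100.0           # solid steel rotor mass
radius_m = 0.25           # rotor radius
rpm = 30000.0             # rotational speed

inertia = 0.5 * mass_kg * radius_m ** 2          # solid cylinder: I = 1/2 * m * r^2
omega = rpm * 2 * math.pi / 60                   # convert rpm to rad/s
energy_j = 0.5 * inertia * omega ** 2

print(f"~{energy_j / 3.6e6:.1f} kWh stored")     # 3.6 MJ per kWh; roughly 4 kWh here
```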

The Science Of Darkness: Pigments, The Cosmos and Nanotechnology

1iV5TqzSlCk | 17 Oct 2019

The Science Of Darkness: Pigments, The Cosmos and Nanotechnology

The first color used by human beings to express themselves artistically was black. The pigment used to create man's first art was charcoal. A pigment is a material that changes the color of reflected or transmitted light. Unlike paints and inks, which rely upon a pigment for color change, dyes chemically bond to the substrate they are applied to. Early black vegetable-based dyes were neither strong nor stable and would quickly fade to brown and grey, especially when exposed to sunlight. The obvious contrast offered by the color black would play a pivotal role in how humanity communicates non-verbally, via ink and the written word. In the 15th century, Johannes Gutenberg's introduction of the printing press to Europe inherently required a new type of ink that would be compatible with the process. These new durable inks paved the way for the mass dissemination of ideas through printed books and a new form of artistic expression, the engraved print. Even in the digital age of computer screens, this vestige remains one of the most common formats for presenting text. By the industrial era, organic colorants were beginning to be replaced by superior synthetic compounds. This led to black gradually becoming the most popular color of business dress in the western world. Our brains perceive color in response to electromagnetic radiation at combinations of frequencies in the visual spectrum. What we perceive as the color black represents the experience of no visible light reaching our eyes. It's important to note that black can only occur through absorption. From experiments going back to the 1800s, it was observed that all objects emit radiation, but black objects in particular possess different radiant properties than reflective ones. These observations led to a new understanding of the color black. Formalized by the German physicist Gustav Kirchhoff in 1860, a black body in its ideal form will absorb all electromagnetic radiation falling on it. In 1900, the British physicists Lord Rayleigh and Sir James Jeans presented the Rayleigh-Jeans law, which attempted to approximate this relationship. This rift between observation and prediction in classical physics was known as the ultraviolet catastrophe. The resolution to this dilemma came from the German physicist Max Planck in the form of Planck's law. With an understanding of how black objects absorb and re-emit radiation, it became possible to measure temperature and other derived properties at a distance. As black-body radiation is emitted from deep within a star, it passes through the star's outer atmosphere. The only thing even closer to an ideal black body, and in fact the most perfect black-body radiator and implicitly the blackest object ever observed in nature, is the universe itself. The cosmic microwave background radiation as it's observed today is the most flawless example of black-body radiation. In the age of space exploration, materials darker than traditional black pigments would be required for increasing the performance of astronomical cameras, telescopes, and infrared scanning systems. Gold black was one of the first commonly used light-absorbing nanostructures. Its production causes the gold to form a structure of nano-chains. These chains overlap one another, joining together very loosely, resulting in a 'fluffy' porous structure that traps light. Gold black can absorb almost 99% of the light that falls on it.
The next level of highly absorbing coatings would come from the National Physical Laboratory, or NPL, in the United Kingdom, with a coating known as super black. Because the super black treatment is only a few microns thick, it suffers from transparency in the infrared spectrum, making it practical only for visible light. Taking its name from its structure, vertically aligned carbon nanotube arrays, Vantablack absorbs 99.96% of the light that falls on it. Furthermore, Vantablack suffered from far less outgassing and degradation, making it more suitable for commercial applications. MIT created a technique that produced a material 10 times darker than Vantablack, absorbing 99.995% of visible light from all angles. Though the general mechanism of its "blackness" is similar to Vantablack, the mechanics of why this substrate and technique produced such a dramatic increase in absorption performance remains a mystery. The concept of black will always be intertwined with technological advancement and the furthering of our understanding of the world. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind FOOTAGE USED Griffith University - Oldest known figurative cave art discovered in Borneo https://www.youtube.com/watch?v=b4-rKQSLFg8 Stock footage provided by Motion Places, downloaded from https://www.videvo.net Caveman Painting - https://www.youtube.com/watch?v=uCkYZG0qQAo Pronomos Painter Red Figure Pottery - https://www.youtube.com/watch?v=mQhAwhg7H1Q Messages of Christ - https://www.youtube.com/watch?v=yeikqw0kyqI
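To make the ultraviolet catastrophe mentioned in the black-body discussion above concrete, the sketch below compares Planck's law for spectral radiance with the classical Rayleigh-Jeans approximation at a few wavelengths. The temperature is an arbitrary illustrative choice; the physical constants are standard values.

```python
import math

# Physical constants (SI units).
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K

def planck(wavelength_m: float, T: float) -> float:
    """Planck's law: spectral radiance of a black body, W / (m^2 * sr * m)."""
    a = 2 * h * c ** 2 / wavelength_m ** 5
    return a / (math.exp(h * c / (wavelength_m * k * T)) - 1)

def rayleigh_jeans(wavelength_m: float, T: float) -> float:
    """Classical approximation; diverges as wavelength shrinks (ultraviolet catastrophe)."""
    return 2 * c * k * T / wavelength_m ** 4

T = 5800  # roughly the Sun's surface temperature, used here only as an example
for wl_nm in (10000, 2000, 500, 100):
    wl = wl_nm * 1e-9
    print(f"{wl_nm:>5} nm  Planck: {planck(wl, T):.3e}   Rayleigh-Jeans: {rayleigh_jeans(wl, T):.3e}")
# The two agree at long wavelengths, but the classical law grows without bound
# at short wavelengths, while Planck's law rolls off.
```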

The Spark Plug Story

smIqDKTm2hE | 19 Sep 2019

The Spark Plug Story

The first documented use of a spark plug in an internal combustion engine is attributed to the Belgian engineer Jean Joseph Étienne Lenoir in 1859. Lenoir is known for developing the first internal combustion engine, which burned a mixture of coal gas and air. The air-fuel mixture it aspirated was ignited by a "jumping spark" ignition system, which he patented in 1860. Lenoir's ignition system created sparks by using high-voltage electricity to jump an air gap. This was accomplished by sending mechanically generated low-voltage pulses through a type of electrical transformer known as a Ruhmkorff coil. The coil would transform the low-voltage pulses into lower-current, high-voltage pulses suitable for spark generation. Reliably igniting over 20 million combustion cycles while surviving exposure to the extreme temperatures and pressures of ignited fuel would prove to be a formidable challenge. All spark plugs are fundamentally composed of two electrodes separated by an insulator. These electrodes converge at a "spark gap", where spark generation occurs. As the initial current flows from the ignition coil to the spark plug's electrodes, the flow of electricity is initially blocked by the insulating properties of the air-fuel mixture within the gap. As the voltage pulse ramps up, the potential created between the electrodes begins to restructure the gases within the spark gap. As the voltage increases further, the insulating limit, or dielectric strength, of the spark-gap gases begins to break down, causing them to ionize. The first spark plugs had a very minimal set of operational requirements. Their main design concerns were the plug's fit and position and its ability to maintain an operating temperature range that would allow the plug end to self-clean by burning off deposits. The thermal properties of a spark plug are designated by a relative heat range. The emergence of leaded gasoline in the 1930s would also cause aggressive deposit buildup on the mineral insulator ends. To keep up with this, construction shifted towards a single-piece design composed of a ceramic called sintered alumina. Sintered alumina plugs operated at much higher temperatures, which helped counteract the fouling issues caused by leaded fuels via deposit burn-off. Its electrical insulation properties also allowed much higher voltages to be used, tolerating up to 60,000 volts. This would be further improved by the addition of ribs, which increased the surface area of the insulator. Modern spark plugs still use sintered alumina and can tolerate voltages well past 100,000 volts. The next big change in spark plug design would occur in the form of copper-core plugs during the 1970s as a direct result of policy changes. In 1974, the US government began to impose fuel mandates and stricter emissions regulations, which prompted the removal of lead from gasoline, the introduction of catalytic converters, and the move to smaller, more efficient engine designs. By the 1990s, computer-controlled ignition systems were becoming common, and the need for more energetic spark generation in newer higher-compression and forced-induction engines was becoming apparent. This was accomplished by moving ignition coils into assemblies that sit directly above the spark plug. Known as coil-on-plug ignition, the one-coil-per-cylinder configuration coupled with the shorter, direct path of current flow allows for extremely high voltages to be used, often well past 100,000 volts.
On modern fuel-injected cars, higher compression ratios as well as tighter control of combustion timing are used to extract as much energy as possible, increasing power and efficiency. As an engine's rotating speed increases, triggering the ignition event slightly before the point of maximum compression within a cylinder, known as advancing the timing, gives the combustion process more time to occur. Under certain conditions, uncontrolled combustion can be triggered as small pockets of air-fuel mixture explode outside the envelope of the normal spark-triggered combustion front. This is known as detonation, and it can occur when timing is advanced too aggressively or the air-fuel trim is mismatched for the conditions within the cylinder. Engine knock sensors were developed for this task, and they function as highly tuned microphones, listening for the tones of sound produced in an engine block as it experiences detonation. The limited ability to accurately manage detonation also kept combustion chamber designs relatively conservative. During the late 1990s, several manufacturers were researching better methods to detect detonation. The advent of ionic detonation detection used spark plugs themselves to sense chamber ionization. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind
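To see why timing must be advanced as engine speed climbs, it helps to convert crankshaft rotation into time. A minimal sketch, assuming an illustrative burn duration of about 2 milliseconds, which is a ballpark figure and not one quoted in the video:

```python
# Rough ignition-advance arithmetic: how many crank degrees pass during combustion?
# The 2 ms burn duration is an illustrative assumption, not a quoted figure.
burn_time_s = 0.002

for rpm in (1000, 3000, 6000):
    degrees_per_second = rpm * 360 / 60            # crank degrees swept per second
    burn_degrees = degrees_per_second * burn_time_s
    print(f"{rpm} rpm: combustion spans ~{burn_degrees:.0f} crank degrees")

# At higher rpm the same burn time spans far more crank rotation, so the spark
# must fire earlier (more advance) for peak pressure to land just after top dead center.
```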

The Illusion Of Digital Audio

y4mUMZ5SPFE | 30 Aug 2019

The Illusion Of Digital Audio

Though based on principles established in the late 1930s, digital audio encoding started to appear in the telecommunications industry in the 1960s. Research into its commercial use was pioneered by the Japanese national broadcasting organization NHK and Nippon Columbia. The process of measuring a signal's amplitude in discrete steps is known as quantization, and it is accomplished by a device known as an analog-to-digital converter, or ADC. The second element of encoding audio signals digitally is how frequently samples of the signal are taken, or the sampling rate. The rate at which an ADC samples a signal determines the frequency response of the digitizing process. In modern digital audio, sampling rates of 48 kHz are common, offering an average of 2-3 samples for frequencies at the uppermost limits of human hearing. While PCM is the more popular method of encoding audio signals digitally, other methods such as pulse density modulation, or PDM, are sometimes used. Where electrical signaling decoupled sound reproduction from its physical connection to vibrating waves, digital sound completely detached the information of sound from any underlying medium. Once an analog signal is digitized, it exists as a stream of bits. No matter how many times it's copied or transferred between storage media, the information always remains exactly as it was initially captured. Audio signal data could now be instantly copied, stored on multiple forms of storage media, and transmitted digitally, never degrading or changing. Consuming the audio stored in a stream of bits is done by first converting the data back to an analog signal, via a digital-to-analog converter, or DAC. With audio data now stored effectively as a table of amplitude values, the simultaneous advancement of computing technology could be harnessed to process audio in more complex and powerful ways. While some of the properties of analog signal processing could be replicated mathematically in software, new forms of analyzing and modifying digital signals were developed. The power of software also allows for incredible flexibility, allowing filters to be modified, structured, and layered in complex configurations without ever changing physical components. One of the largest drawbacks of digital audio data is its storage requirements. In an era when storage capacity was expensive and limited, the notion of hundreds of megabytes being used up for a single piece of media became a hindrance to its practical migration beyond optical discs. The rise of digital video would also inherently require a more efficient method for storing audio. Add to this the emergence of the internet and its eventual transformation into a global media distribution platform, and the need for new methods of transmitting digital audio within limited data bandwidth becomes apparent. In digital audio, the metric of bitrate is used to specify the minimum transfer throughput required to maintain realtime playback of a stream. In general-purpose data compression, repeated instances of data within a dataset are identified and restructured with a smaller reference to a single expression of that repeated section. This effectively removes repeated content, reducing storage requirements overall. When the data is uncompressed, the references are replaced with the original repeating content, restoring the dataset perfectly to its uncompressed state. This is known as lossless compression, since the act of compressing the data doesn't destroy any information.
Lossless compression is used where it is critical that no information is lost, as is the case with most data used by computers. In contrast, information that interacts with our brains via our senses, such as visual and auditory experiences, behaves very differently. Specifically, with audio, not every audible frequency of sound that enters our ears is perceived by our brains. This phenomenon is known as auditory masking, and it can be exploited to compress digital audio data in a lossy manner. This lossy compression removes significant amounts of information from the signal while still maintaining most of the audio's fidelity. Frequency masking occurs when a sound is made inaudible by a noise or sound of the same duration as the original sound. This tends to occur when two similar frequencies are played at the same time, with one being significantly louder than the other. Temporal masking, in contrast, occurs when a sudden sound obscures other sounds which are present immediately preceding or following it. From extensive research conducted on auditory masking, response models have been developed that map the manner in which our hearing responds to this phenomenon. Masking is used to remove information in the frequency domain in order to compress digital audio. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind FOOTAGE USED Performance Provided By Kevin Sloan Currents of Change for String Quartet https://www.youtube.com/watch?v=8s2KqXFV-Pk
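A quick worked example of the bitrate metric described above, for uncompressed PCM audio: bitrate is simply sample rate × bit depth × channel count. The CD-style parameters below are standard figures, used here only as an illustration.

```python
# Uncompressed PCM bitrate: sample_rate * bits_per_sample * channels.
sample_rate = 44_100      # samples per second (CD audio)
bit_depth = 16            # bits per sample
channels = 2              # stereo

bitrate_bps = sample_rate * bit_depth * channels
print(f"{bitrate_bps / 1000:.0f} kbps")                         # ~1411 kbps
print(f"{bitrate_bps / 8 / 1_000_000 * 60:.1f} MB per minute")  # ~10.6 MB/min

# A lossy codec at 128 kbps carries roughly a tenth of this data, which is
# why auditory masking is needed to decide what information can be discarded.
```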

The Evolution Of Music Storage

mVNJvadmLbE | 17 Aug 2019

The Evolution Of Music Storage

The field of audio evolved over the last 170 years, starting with the human voice first being imprinted on paper covered in soot, on a device known as the phonautograph. It would evolve into the wax cylinder phonograph and eventually the disc-based phonograph we know today. Mechanical sound storage would be replaced by electrical sound reproduction, via microphones and loudspeakers. Sound could now be transmitted over lines and via radio. The medium of vinyl records would soon be accompanied by the advent of magnetic tape storage. Magnetic tape allowed for sound editing and convenient storage, as well as transfer into other mediums such as optical audio. Electrical sound storage also brought the concept of signal processing with it. The frequency components of sound could be analyzed and modified to enhance quality, fulfill storage needs, and create artistic effects. Several common audio signal processing techniques discussed are audio filters, dynamic range compressors and noise reduction. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind FOOTAGE USED Griffith University Oldest known figurative cave art discovered in Borneo https://www.youtube.com/watch?v=b4-rKQSLFg8 Scott's Old Curiosity Shop Down Home Rag played on the 1918 Edison Amberola Cylinder Phonograph https://www.youtube.com/watch?v=RuaK4OpDgUg THEVICTROLAGUY 1857 phonautograph / Phil / phonautogram https://www.youtube.com/watch?v=S5-XLlVHHBE Dot Product LP Vinyl Cut Dot Product https://www.youtube.com/watch?v=KUFRt9f4RUA

The World Of Microscopic Machines

iPGpoUN29zk | 20 Jul 2019

The World Of Microscopic Machines

Micro-electromechanical systems, or MEMS, are tiny integrated devices that combine mechanical and electrical components. Traditional manufacturing techniques such as milling, turning, and molding become impractical at small scales, so MEMS devices are fabricated using the same batch processing techniques used to fabricate integrated circuits. These devices can range in size from a few microns to several millimeters. Because MEMS devices are a hybrid of mechanical and electronic mechanisms, they're generally fabricated using a combination of traditional integrated circuit technologies and more sophisticated methods that manipulate both silicon and other substrates in a manner that exploits their mechanical properties. In bulk micromachining, the substrate is removed in a manner similar to traditional integrated circuit techniques. Surface micromachining, by comparison, is predominantly additive in nature and is used to create more complex MEMS-based machinery. Material is deposited on the surface of the substrate in layers of thin films. High-aspect-ratio micromachining differs dramatically from the other two techniques in that it's reminiscent of traditional casting. The accelerometers used in automotive airbag sensors were one of the first commercial devices using MEMS technology. In widespread use today, they measure the rapid deceleration of a vehicle upon hitting an object by sensing a change in voltage. Based on the rate of this voltage change, the on-die circuitry subsequently sends a signal to trigger the airbag's explosive charge. In most smartphones, a MEMS-based gyroscope complements the accelerometer. They're also found in navigation equipment, avionics and virtually any modern device that requires rotation sensing. MEMS gyroscopes work by suspending an accelerometer on a platform that itself uses a MEMS-based solenoid to create a constant oscillating motion. Another hugely successful application of MEMS technology is the inkjet printer head. Inkjet printers use a series of nozzles to spray drops of ink directly onto a medium. Depending on the type of inkjet printer, two popular MEMS technologies are used to accomplish this: thermal and piezoelectric. DLP One of the earliest uses of MEMS devices in the form of large mechanical arrays on a single die has been for display applications. Invented by Texas Instruments, the digital micromirror device builds each pixel from a multi-layered structure consisting of an aluminum mirror mounted on hinges. These pixels rest on a CMOS memory cell. Digital micromirror devices form the basis for another emerging application of MEMS technology, electro-optics. These bottlenecks can be eliminated by using fully optical networks that offer far superior throughput capabilities. One of the more promising applications of MEMS technologies has been the emergence of biomedical MEMS devices. Referred to as Bio-MEMS devices, they tend to focus on the processing of fluids at microscopic scales. One of the first and simplest examples of a bio-MEMS device is the micro-machined microtiter plate. A microtiter plate is a flat plate with multiple wells used as small test tubes for testing and analysis. The possibilities with MEMS devices are astounding: applications from low-loss, ultra-miniature and highly integrated tracking radio antennas to sensors that can measure heat, radiation, light, acoustics, pressure, and motion, and even detect chemicals.
SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind FOOTAGE USED Crash Footage IIHS - 2018 Toyota Camry passenger-side small overlap IIHS crash test https://www.youtube.com/watch?v=fEcSdtE4DhE IC Manufacturing Infineon Technologies Austria - A look at innovative semiconductor manufacturing in Villach https://www.youtube.com/watch?v=clpEw69-7jk MEMS Images Courtesy Sandia National Laboratories, SUMMiT™ Technologies, www.sandia.gov/mstc MEMS IC Image courtesy of Vesper Technologies Medical MEMS Devices Prof. Mark R. Prausnitz Georgia Institute of Technology MEMS array inside Cavendish Kinetics' antenna tuners. Image courtesy of Cavendish Kinetics
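As a purely illustrative sketch of the airbag-trigger logic described above, the decision reduces to watching the accelerometer output for a sustained, rapid deceleration. The threshold, sample count, and function names below are hypothetical assumptions, not taken from any real airbag controller.

```python
# Toy airbag-trigger logic: fire when deceleration exceeds a threshold for
# several consecutive samples. Threshold and sample count are hypothetical.
G = 9.81                    # m/s^2
THRESHOLD_G = 60.0          # assumed crash-level deceleration
REQUIRED_SAMPLES = 3        # consecutive samples above threshold before firing

def should_deploy(decel_samples_ms2):
    consecutive = 0
    for a in decel_samples_ms2:
        consecutive = consecutive + 1 if a / G >= THRESHOLD_G else 0
        if consecutive >= REQUIRED_SAMPLES:
            return True     # sustained spike: send the fire signal
    return False            # noise or a hard bump, not a crash

print(should_deploy([5, 12, 700, 720, 710, 300]))   # True
print(should_deploy([5, 12, 700, 20, 710, 15]))     # False (no sustained spike)
```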

Chasing Life On Saturn's Moon: NASA's Dragonfly Mission To Titan

--BNmvuHpAg | 04 Jul 2019

Chasing Life On Saturn's Moon: NASA's Dragonfly Mission To Titan

Chasing Life On Saturn's Moon: NASA's Dragonfly Mission To Titan Released as its latest mission, NASA's Dragonfly aims to explore Saturn's largest moon, Titan. Derived from the New Frontiers program, the mission will see a drone-like rotorcraft exploring a multitude of promising locations across the moon's topography. Following a 2026 launch, Dragonfly will arrive in 2034 and will eventually travel 108 miles around Titan with the intention of furthering the work of the Cassini mission. In September 2017, Cassini intentionally plunged itself into Saturn's atmosphere, marking the end of a 13-year mission. Upon its initial arrival, Dragonfly will land at the "Shangri-La" dune fields, surveying and taking samples within a 5-mile radius of the linear dunes. Believed by NASA to show evidence of past liquid water, the Selk impact crater is the next stop in Dragonfly's mission, where NASA scientists believe the important ingredients for life, combined with something that hit Titan in the past, possibly tens of thousands of years ago, may exist. According to NASA, "there is evidence of past liquid water, organics – the complex molecules that contain carbon, combined with hydrogen, oxygen, and nitrogen – and energy, which together make up the recipe for life." Titan has long been a topic of research and scientific assessment, driven by its potential for hosting extraterrestrial life. Larger than Mercury, Titan has an atmosphere that rains and snows, surface features like lakes and oceans filled with methane and ethane, and underground liquid oceans. With an atmospheric pressure 60% greater than that of Earth and surface temperatures of minus 290 degrees Fahrenheit, it's believed Titan may not be much different from primordial Earth. NASA scientists are hoping the Dragonfly mission will return valuable data on strange minerals that are exclusive to Titan and non-existent on Earth. Suspected to form rings around Titan's lakes, these minerals include co-crystals made up of acetylene and butane. While acetylene and butane are present on Earth, they are solid on Titan. Given its complex chemistry, it's safe to assume that Titan is not hospitable to humans, but it remains attractive to researchers and further study.

How The Ollie Works: The Physics Of Skateboarding's Most Common Trick

zAsdpRHdpDM | 29 Jun 2019

How The Ollie Works: The Physics Of Skateboarding's Most Common Trick

How The Ollie Works Originated in the 1970s by Alan "Ollie" Gelfand, the eponymous maneuver allows skateboarders to leap and arch over obstacles. But how exactly is this move accomplished with such seamless choreography? From the point of view of the uninitiated, the ollie seems almost mystical. However, when we peel it back and examine the sum of its parts, the ollie is an orchestrated sequence of balance, movement, and forces applied via the skateboarder's body. It all reduces to a few principles of physics - force, torque, Newton’s Third Law and friction. To perform the ollie, a skateboarder intuitively manages two factors - the net force and net torque applied to the board. A force, like gravity, pushes or pulls an object in a specific direction, while torque rotates an object in a specific direction relative to the object’s pivot point. As the skateboarder approaches the obstacle, he begins to bend his knees, pull in his arms and close his shoulders in preparation for the vertical thrust necessary to clear the obstacle - essentially pushing down on the ground. However, Newton’s third law states, “For every force you apply, there will be an equal and opposite force applied back onto you.” Put another way, the ground will push back at the skateboarder with an equal and opposite force, thrusting him into the air. This is also known as “pop”. During the “pop” phase, the skateboarder opens his shoulders and extends his arms and legs. This orchestrated sequence of events affects the torque of the skateboard. The act of pushing down on the tail of the board causes the back wheels to become a pivot point - the tail end of the skateboard rotates down and the nose rotates up, leaving the board at an angle. However, an additional step is required to clear the obstacle. As the parabola begins, the skateboarder uses friction to lift the board even higher. Friction is a force that opposes motion between two surfaces. So in this case, the skateboarder slides his front foot up the grip tape. The opposing interaction of these two surfaces causes the board to move further upward. But in order for the skateboarder to clear the obstacle, he must then push his front foot down. The act of pushing his front foot down changes the pivot point to the center of the skateboard, generating torque against the board once again. The back of the skateboard goes up and the front goes down, creating the apex of the parabola. All that’s left is for the skateboarder to let gravity bring him back down to the ground - bending his knees to absorb the impact.
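As a rough numerical companion to the "pop" phase described above, the following Python sketch computes the net torque about the rear-wheel pivot when the rider stomps the tail. All masses, lengths, and forces are assumed values chosen only to illustrate the balance of torques, not measured data:

# Toy torque calculation for the "pop" phase of an ollie.
# Values are illustrative assumptions, not measured data.

BOARD_MASS_KG = 3.0       # assumed deck + trucks + wheels
TAIL_FORCE_N = 400.0      # assumed downward stomp on the tail
G = 9.81

# Treat the rear axle as the pivot. The tail extends ~0.12 m behind it,
# and the board's centre of mass sits ~0.28 m ahead of it.
tail_lever_m = 0.12
com_lever_m = 0.28

# Torques about the pivot (positive = nose rotating upward).
torque_from_stomp = TAIL_FORCE_N * tail_lever_m
torque_from_weight = -(BOARD_MASS_KG * G) * com_lever_m

net_torque = torque_from_stomp + torque_from_weight
print(f"Net torque about rear axle: {net_torque:.1f} N*m")
# A positive result means the stomp overcomes the board's own weight,
# so the tail drops and the nose rotates up, as described above.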

The Computers Behind NASA's Mars Curiosity Rover

1eUddT5BJ78 | 27 Jun 2019

The Computers Behind NASA's Mars Curiosity Rover

The Computers Behind NASA's Mars Curiosity Rover Of all the systems onboard that have furthered our understanding of the Martian landscape, none have been as critical and as overworked as Curiosity's onboard computer system. Curiosity’s entire mission relies primarily on two identical on-board rover computers, called Rover Computer Elements, or RCEs. These single-board computers were designed to be hardened against the extreme radiation of space and safeguarded against power-off cycles. Each computer has 256 kilobytes of EEPROM, 256 MB of DRAM, and 2 GB of flash memory. They both run a safety-critical, real-time operating system known as VxWorks. VxWorks is used heavily in the aerospace and defense industries and can be found in the avionics systems of a variety of aircraft. At the heart of the Rover Computer Elements is one of the most expensive CPU systems available, the BAE Systems RAD750. Costing over a quarter million dollars per system board, the RAD750 CPU is a 10.4 million transistor radiation-hardened processor that has been proven in dozens of space-based deployments since 2005. The single-core CPU is based on the PowerPC 750 architecture and can be clocked anywhere from 110 to 200 MHz, offering over 266 million instructions per second of processing power while operating on only 5 watts. It’s manufactured on a die almost twice the size of its commercial counterparts, employing a 250 or 150 nm photolithography process comparable to commercial semiconductor manufacturing of the late 1990s. This process contributes to the CPU’s immunity to radiation and tolerance for the extreme temperature swings of space. The RAD750 can handle anywhere from -55 degrees C all the way up to 125 degrees C. The threat posed by radiation to silicon-based microelectronics can be both disruptive and destructive. High-energy particles can cause a Single Event Upset, in which radiation causes unwanted state changes in memory or a register, disrupting logic circuitry. Destructive strikes known as Single Event Latchup, Single Event Gate Rupture, or Single Event Burnout are permanent effects of radiation that can pin logic circuitry into a stuck state, rendering it useless. The RAD750 is capable of withstanding up to 1 million rads of radiation exposure. This level of hardness makes it around 6 orders of magnitude more resistant than standard consumer CPUs. When Curiosity landed on Mars in 2012, it operated on one of its RCEs, known as the “Side-A” computer. Immediately after landing, a major software update was sent to the rover, flushing out the no-longer-needed entry, descent and landing applications and replacing them with software optimized for surface operations. This was due to both the memory restrictions of the computers and the need for post-launch software development. However, by the 200th day of the mission, the Side-A computer started to show signs of failure due to corrupted memory. The rover got stuck in a boot loop, which prevented it from processing commands and drained the batteries. NASA executed a swap to the Side-B computer so that engineers could perform remote diagnostics on Side-A. In the following months, it was confirmed that part of Side-A’s memory was damaged. The unusable regions of memory were quarantined, though NASA decided to keep Side-B as the primary computer due to its larger amount of usable memory.
The Side-B computer would operate for most of Curiosity’s mission, but in October of 2018 computer issues would surface again when it began experiencing problems that prevented the rover from storing key science and engineering data. Left with no other options, the Curiosity team spent a week evaluating the Side-A computer and prepared it for swapping back in as the primary computer. With Side-A once again active, the Curiosity team was able to investigate the issues of the Side-B computer in greater detail, determining that it also suffered from faulty regions of memory. Similar to how the Side-A faults were handled, the bad regions of Side-B memory would also be flagged and quarantined from use. As of June 2019, Curiosity is still operating on its Side-A computer, with the lower memory capacity caused by its initial failure. However, on March 12th, 2019, the Side-A computer experienced a computer reset that triggered the rover's safe mode. This was a cause for concern, as it was the second computer reset in three weeks. Both resets were caused by corruption in the computer's memory, suggesting further damage within the memory of the Side-A computer. Despite the glitches, the Curiosity rover still remains functional on its Side-A computer, with the team contemplating an eventual switch to the Side-B system. But with the slow decline of both computers' memory systems, it’s possible that the death blow to Curiosity's extraordinary mission may come from within the handful of chips that form the memory of its computers.

The Science Of Flatness

OWa3F4bKJsE | 22 Jun 2019

The Science Of Flatness

Flatness is a property often misrepresented by our own intuition. Many of the objects we consider flat pale in comparison to surfaces manufactured to actually be flat. It's also a property that our industrialized world relies on to function. While most of us experience flatness as part of aesthetics and ergonomics, flatness in manufacturing is a critical property for positioning, mating and sealing parts together. In a car engine, for example, the high pressures produced by combustion are contained by two mating flat surfaces aided by a gasket. Let's look at a sheet of float glass. The floating process self-levels the glass, giving it a relatively flat, uniform thickness. Let's say a manufacturer's specification calls for a 3 mm thick sheet of glass. For a sheet to pass a quality check, its thickness is sampled at various points along its length, and as long as it is 3 mm thick, plus or minus a specified tolerance, the sheet passes. But what if, during the process of moving the floating ribbon of molten glass, a subtle disturbance is introduced to the molten metal? Let's say this disturbance imparts a 0.25 mm wave-like undulation throughout the entire ribbon. Now, to the eye, the cut sheets would appear flat and they would pass the quality check for thickness, but the surface of those sheets of glass is far from flat. Flatness isn’t derived from how closely a part matches its specified dimension. It is a property completely independent of the part’s gross shape. If we take a surface and sandwich it between two imaginary parallel planes, the gap between the planes that encompasses the surface is known as a tolerance zone. The smaller this distance, the flatter the specification. On parts that do explicitly define flatness, the method of both measuring and producing flatness is determined by how tight a tolerance zone is required. Flatness specifications down to around 10 microns, or about 4/10,000ths of an inch, are quite common in machinery. The mating and sealing surfaces found in car engines sit at this level of flatness. Sealing in fluids at this level of flatness requires the use of a gasket. Field testing flatness at this level is done with a precision straight edge of known flatness and clearance-probing tools known as feeler gauges. Actually measuring the flatness of a surface is a lot more complicated. An obvious solution would be to measure the surface against a flat reference. For example, if a part has a surface parallel to the surface to be measured, it could be placed on a surface plate. A surface plate is a flat plate used as the main horizontal reference plane for precision inspection. A height gauge could then be used to probe the top of the surface for flatness relative to the surface plate. Alternatively, if we first place the part to be measured upon three columns with adjustable heights and then, with a height gauge, run the probe across the surface while watching the amplitude of the needle, we get a snapshot of the difference between the highest and lowest points on that surface. Automating the process with the use of a coordinate measuring machine, or CMM, is a common practice. CMMs are typically computer-controlled and can be programmed to perform the tedious, repetitive measurements. Going beyond the 10-micron levels of flatness requires the use of surface grinding. This process is typically used to produce precision parts, precision fixtures, measurement equipment, and tooling. Lapping is the process of rubbing two surfaces together with an abrasive between them in order to remove material in a highly controlled manner.
In lapping, a softer material known as a lap is "charged" with an abrasive. The lap is then used to cut a harder material. The abrasive embeds within the softer material, which holds it and permits it to score across and cut the harder working material. Wringing is the process of sliding two ultra-flat faces together so that they lightly bond. When wrung, the faces will adhere tightly to each other. This technique is used in an optics manufacturing process known as optical contact bonding. When an optical flat's polished surface is placed in contact with a surface to be tested, dark and light bands are formed when viewed under monochromatic light. These bands are known as interference fringes, and their shape gives a visual representation of the flatness. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind FOOTAGE USED Stähli Side Flathoning Machine https://www.youtube.com/watch?v=tYv7dcFPAYA Courtesy of Stähli Lapping Technology Ltd - www.STAHLI.com oxtoolco (Tom Lipton) - Precision Lapping 101 https://www.youtube.com/watch?v=j9FsmsjXKx8 oxtoolco (Tom Lipton) - Russian Optical Flat Testing https://www.youtube.com/watch?v=8I5P-r4ogm4 Pierre's Garage - Very cheap unique wavelength light for using with Optical Flats... Using a 532nm. 50mw LED laser https://youtu.be/xSYl7q6yKPU Joe Pieczynski - How to Accurately Inspect a Flat Surface https://www.youtube.com/watch?v=WwUqPiQ9JAQ
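The height-gauge measurement described earlier - probing a surface and reading the spread between its highest and lowest points - can be expressed numerically. The Python sketch below, using invented probe points, fits a least-squares reference plane and reports the peak-to-valley deviation from it, one common way of evaluating flatness:

import numpy as np

# Illustrative (x, y, z) probe points in millimetres; values are invented.
points = np.array([
    [0.0, 0.0, 0.000],
    [50.0, 0.0, 0.004],
    [100.0, 0.0, 0.007],
    [0.0, 50.0, 0.002],
    [50.0, 50.0, 0.009],
    [100.0, 50.0, 0.005],
])

# Least-squares fit of the reference plane z = a*x + b*y + c.
A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)

# Deviation of each probe point from the fitted reference plane.
residuals = points[:, 2] - A @ coeffs

# Flatness as the peak-to-valley spread of the deviations (the width of
# the tolerance zone that would just contain the measured surface).
flatness_mm = residuals.max() - residuals.min()
print(f"Flatness (peak-to-valley): {flatness_mm * 1000:.1f} microns")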

The Stunning Images Of Mars: Curiosity Rover

D0VFPAZ50Yk | 15 Jun 2019

The Stunning Images Of Mars: Curiosity Rover

The Stunning Images Of Mars: Curiosity Rover This is the Curiosity rover. Designed initially to explore the Gale crater on Mars as part of NASA's Mars Science Laboratory mission, Curiosity was launched from Cape Canaveral on November 26, 2011, and landed inside Gale on August 6, 2012. The landing site of the car-sized rover was less than 1 ½ miles from its touchdown target after completing a 350 million mile journey. Its goal was to investigate the Martian climate and geology and assess whether environmental conditions were favorable for microbial life. It would also go on to conduct planetary habitability studies in preparation for human exploration of Mars. Curiosity's two-year mission would be extended indefinitely, and it continues to send back images and data to this day. This is a visual tour of its mission. Image 2 - This mosaic taken at the rover’s landing site in the Gale Crater was created by using 27 images from its mast-mounted Left Navigation Camera. Image 3 - Looking at Curiosity's landing site in color reveals the gravelly surface of the Gale Crater. The terrain falls off into a depression, and beyond that is the boulder-strewn, red-brown rim of a moderately-sized impact crater. Farther off in the distance, there are dark dunes and then the layered rock at the base of Mount Sharp. Image 4 - This image from the Mars Hand Lens Imager camera shows a small bright object on the ground beside the rover. The object is about half an inch long, and the rover team believes it to be debris from the spacecraft, possibly from the events of landing on Mars. Image 5 - This is the "Shaler" outcrop, taken during the 120th day of Curiosity's mission. Its dramatic layering patterns suggest evidence of past streamflow in some locations. Image 6 - This is a view of the "John Klein" location selected for the first rock drilling by NASA's Mars rover Curiosity, taken during the afternoon of the 153rd Martian day of Curiosity's mission. The veins that give evidence of a wet past are common in the flat-lying rocks of the area. Image 7 - Called the "mini drill test," Curiosity used its drill to generate this ring of powdered rock for inspection in advance of the rover's first full drilling. Curiosity performed the mini drill test during the 180th Martian day of its mission. Image 8 - This is Mount Sharp, also known as Aeolis Mons, a layered mound in the center of Mars' Gale Crater, rising more than 3 miles above the crater floor. The lower slopes of Mount Sharp were a major destination for the mission, where it searched for evidence of a past environment favorable for microbial life. Image 9 - This is the view of an outcrop called "Point Lake." The outcrop is about 20 inches high and pockmarked with holes. Curiosity recorded the 20 component images for this mosaic on the mission's 302nd Martian day. Image 10 - This scene combines seven images from the telephoto-lens camera onboard Curiosity. The images were taken on the 343rd Martian day of the mission. The rover had driven 205 feet the day before to arrive at the location providing this vista. The center of the scene is toward the southwest. A rise topped by two gray rocks near the center of the scene is informally named "Twin Cairns Island." Image 11 - This mosaic of images shows geological members of the Yellowknife Bay formation and the sites where Curiosity drilled into the lowest-lying member, called Sheepbed, at targets "John Klein" and "Cumberland."
The scene has the Sheepbed mudstone in the foreground and rises up through the Gillespie Lake member to the Point Lake outcrop. These rocks record superimposed ancient lake and stream deposits that offered past environmental conditions favorable for microbial life. Rocks here were exposed about 70 million years ago by the removal of overlying layers due to erosion by the wind. Image 12 - This scene combines images taken during the midafternoon of the mission's 526th Martian day. The sand dune in the upper center of the image spans a gap, called "Dingo Gap," between two short scarps. Image 13 - This look back at a dune that Curiosity drove across was taken during the 538th Martian day. The rover had driven over the dune three days earlier. Image 14 - This scene combines multiple images taken with both cameras of the Mast Camera (Mastcam) on Curiosity during its 1,087th Martian day. Taken at the lower slope of Mount Sharp and spanning from the east to the southwest, it shows large-scale crossbedding in the sandstone. This is a feature common in petrified sand dunes, even on Earth. Image 15 - Curiosity recorded this view of the sun setting at the close of the mission's 956th Martian day. This was the first sunset observed in color by Curiosity from the Martian surface.

How Pilots Land Blind

0om1irJondE | 31 May 2019

How Pilots Land Blind

How Pilots Land Blind Landing an aircraft requires a delicate balance of navigating, managing the descent of the aircraft and maneuvering at low speeds while still maintaining a safe level of lift to stay in the air. Add radio communications, weather, and air traffic to the equation and it quickly becomes a complex operation. Civil aviation is broken down into two sets of operating rules: VFR, or visual flight rules, and IFR, or instrument flight rules. Visual flight rules allow an aircraft to fly solely by reference to outside visual cues, such as the horizon, nearby buildings and terrain features, for navigation and orientation. Aircraft separation is also maintained visually. Visual flight rules require VMC, or visual meteorological conditions. Commercial airliners generally operate under instrument flight rules. Instrument flight rules are required in order to operate in weather conditions below visual meteorological conditions, known as instrument meteorological conditions. Aircraft flying under IFR are also tracked and directed by air traffic control along their routes, relying on controllers to maintain safe separation from other traffic within controlled airspace. The landing phase of an airliner can be broken down into 5 segments - arrival, initial approach, intermediate approach, final approach, and missed approach. The final approach is the last leg before a successful landing. It can begin either from a final approach fix, an inbound vector or a procedure turn made at the end of the intermediate segment. It typically begins at a distance of 5–10 nautical miles from the runway threshold. If the pilot fails to identify the runway, a missed approach or “go around” is initiated. This allows the pilot to safely navigate from the missed approach point to a point where they can attempt another approach or continue to another airport. The final approach of an aircraft can be executed in several ways. If visual meteorological conditions exist and the pilot accepts it, air traffic control may direct a visual final approach. Instrument approaches come in two primary types: non-precision and precision. One of the most commonly used types of instrument approaches is a precision approach system known as the Instrument Landing System, or ILS. The first part of ILS is known as its localizer. The antennas left of the runway centerline emit an ILS radio signal modulated with a 90 Hz tone, while those on the right modulate a 150 Hz tone. The second component of ILS is known as its glide slope. This system operates similarly to the localizer, except in the vertical plane and on a separate radio channel that is paired to the localizer’s channel. The decision height is the altitude at which the pilot must forego ILS guidance and identify a visual reference to the runway. Category I ILS approaches are the most common type and are available to all ILS-capable aircraft, including small single-engine planes. Category II and III ILS approaches are where the critical use of aircraft automation comes into play. Known as autoland, this automation is required because of the low decision heights. The autopilot systems on modern airliners are sophisticated, heavily used systems that can automatically adjust flight control surfaces in order to maintain altitude, turning maneuvers, headings, navigation points, and approaches. In a triple redundancy system, if one of the autopilot computers requests a control input that diverges from the other two, it gets voted out of the autopilot system.
In a double redundancy system, if the two computers diverge in control input, the smaller input is used and the autopilot system then takes itself offline, triggering a fault. The ILS ground systems for both CAT II and CAT III approaches are constantly monitored for faults and must be able to switch to back-up generators quickly if power is lost. As the aircraft enters the final approach, it is configured for landing. Lift-modifying devices such as flaps, slats and spoilers are extended. For an autoland procedure, two pilots are required. If visual contact is made with the runway, the autopilot will begin to throttle back power and begin a pitch-up maneuver known as a flare, in order to reduce the energy of the aircraft. As the autoland proceeds after touchdown, automatic braking is deployed to slow the aircraft down. The most hazardous part of an autoland approach is the flare-to-rollout transition, as it offers very little margin of time for responding to a failure. The future of navigation aids suitable for automated landing lies in the Ground-Based Augmentation System, or GBAS. GBAS combines GPS with fixed ground-based reference stations to achieve positional accuracies in the sub-meter range. --- Divider by Chris Zabriskie is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Stock footage provided by Videvo, downloaded from https://www.videvo.net
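The triple-redundancy voting described above can be illustrated with a small Python sketch. This is a toy model rather than real avionics logic; the tolerance value and the rule for deciding when a channel "diverges" are assumptions made for demonstration:

# Toy model of triple-redundancy voting on a control input.
# Tolerance and inputs are assumptions for illustration only.

TOLERANCE = 0.5  # maximum allowed disagreement between channels

def vote(channels, tol=TOLERANCE):
    """Return (selected_value, faulted_channel_index or None).

    If one channel diverges from both of the others by more than `tol`
    while those two agree, it is voted out and the remaining pair is
    averaged. Otherwise the median of all three is used."""
    agree = lambda x, y: abs(x - y) <= tol
    for i, j, k in [(0, 1, 2), (0, 2, 1), (1, 2, 0)]:
        if agree(channels[i], channels[j]) and not agree(channels[i], channels[k]) \
                and not agree(channels[j], channels[k]):
            return (channels[i] + channels[j]) / 2.0, k
    return sorted(channels)[1], None  # median, no channel voted out

print(vote([2.0, 2.1, 2.05]))   # all agree -> median, no fault
print(vote([2.0, 2.1, 9.0]))    # third channel diverges and is voted out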

The Evolution Of CPU Processing Power Part 3: The Origin Of Modern Operating Systems

NTLwMgak3Fk | 26 Apr 2019

The Evolution Of CPU Processing Power Part 3: The Origin Of Modern Operating Systems

SERIES LINK - https://www.youtube.com/playlist?list=PLC7a8fNahjQ8IkiD5f7blIYrro9oeIfJU During the 1960s into the 1970s, the multitasking paradigm was gaining traction in the mainframe world. Initially, the concept was implemented in a cruder form known as multiprogramming. Multiprogramming was accomplished by processing programs in batches, jumping between them during regions of code that wait for hardware input. This would eventually evolve into time-sharing. By the late 1960s, true multitasking started to emerge in operating systems such as those of DEC’s PDP-6, IBM’s OS/360 MFT, and MULTICS. MULTICS would heavily influence the development of UNIX. In a traditional single-process environment, the program being executed generally has full control of the CPU and its resources. This creates issues with efficient CPU utilization, stability, and security as software grows more complex. In multitasking, CPU focus is shuffled between concurrently running processes. Cooperative multitasking was used by many early multitasking operating systems. Whenever a process is given CPU focus by the operating system, the system relies on the process itself to return control back. Preemptive multitasking solved the stability problems of cooperative multitasking by reliably guaranteeing each process a regular period, or “time-slice”, of CPU focus. We also need a way to prevent a process from using memory allocated to another process, while still allowing them to communicate with each other safely. The solution to this is a layer of hardware between the CPU and RAM dedicated to the task, called a memory management unit, or MMU. If a process attempts to access memory outside of the protection rules, a hardware fault is triggered. On some MMUs, the concept of memory access privileging is incorporated into memory management. By assigning levels of privilege to regions of memory, it becomes impossible for a process to access code or data above its own privilege level. This creates a trust mechanism in which less trusted, lower-privilege code cannot tamper with more trusted, critical code or memory. Virtual memory is a memory management technique that provides an abstraction layer over the storage resources available on a system. While virtual memory comes in various implementations, they all fundamentally function by mapping memory accesses from logical locations to physical ones. In January of 1983, Apple released the Lisa. It would soon be overshadowed by the release of the Apple Macintosh one year later. The Macintosh product line would eventually grow dramatically over the years. The Macintosh ran on the Motorola 68K CPU. What made the 68K so powerful was its early adoption of a 32-bit internal architecture. However, the 68K was not considered a true 32-bit processor but more of a hybrid 32/16-bit processor. Despite these limitations, it proved to be a very capable processor, and it supported a simple form of privileging that made hardware-facilitated multitasking possible. The 68K always operates in one of two privilege states: the user state or the supervisor state. By the end of 1984, IBM took its next step forward with the release of its second generation of personal computer, the IBM PC AT. Among some of the new software developed for the AT was a project by Microsoft called Windows. With initial development beginning in 1981, Windows 1.0 made its first public debut on November 10, 1983.
The 80286 was groundbreaking at the time in that it was the first mass-produced processor that directly supported multiuser systems with multitasking. The first advancement was the elimination of multiplexing on both the data and address buses. The second was the moving of memory addressing control into a dedicated block of hardware. The third major enhancement was an improved prefetch unit. Known as the instruction unit, it would begin decoding up to 3 instructions from its 8-byte prefetch queue. The 80286 was capable of addressing 24 bits of memory, or 16 MB of RAM, making the 8086 memory model insufficient. To make use of the full 16 MB as well as facilitate multitasking, the 80286 could also operate in a state known as protected mode. Segment descriptors provide a security framework by allowing write protection for data segments and read protection for code segments. If segment rules are violated, an exception occurs, forcing an interrupt trigger of operating system code. The 80286’s MMU tracked all segments in two tables: the global descriptor table, or GDT, and the local descriptor table, or LDT, which combined could potentially address up to 1 GB of virtual memory. The interrupt structure of protected mode is very different from that of real mode in that it has a table of its own, known as the interrupt descriptor table. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind
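A rough sense of what segment descriptors enforce - a limit and a privilege level checked on every access, with a fault raised on violation - can be given with a simplified Python sketch. This is not the actual 80286 descriptor format; the fields and values here are invented for demonstration:

# Simplified model of protected-mode segment checks.
# Not the real 80286 descriptor layout - fields are illustrative.

from dataclasses import dataclass

@dataclass
class SegmentDescriptor:
    base: int        # linear base address of the segment
    limit: int       # size of the segment in bytes
    privilege: int   # 0 = most privileged, 3 = least
    writable: bool   # whether data writes are allowed

class ProtectionFault(Exception):
    pass

def access(desc, offset, current_privilege, write=False):
    """Translate segment:offset to a linear address, raising a fault
    (which an OS would handle via an interrupt) on any violation."""
    if offset >= desc.limit:
        raise ProtectionFault("offset outside segment limit")
    if current_privilege > desc.privilege:
        raise ProtectionFault("insufficient privilege for this segment")
    if write and not desc.writable:
        raise ProtectionFault("segment is not writable")
    return desc.base + offset

kernel_data = SegmentDescriptor(base=0x10000, limit=0x2000, privilege=0, writable=True)

print(hex(access(kernel_data, 0x100, current_privilege=0)))  # ok: 0x10100
try:
    access(kernel_data, 0x100, current_privilege=3, write=True)
except ProtectionFault as e:
    print("fault:", e)  # lower-privilege code cannot touch this segment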

Wigner's Friend Paradox: Is Observation Inherently Flawed?

5AodzEpvzZw | 29 Mar 2019

Wigner's Friend Paradox: Is Observation Inherently Flawed?

In 1961, the Nobel Prize-winning physicist Eugene Wigner conceptualized a thought experiment revealing a little-known paradox of quantum mechanics. Wigner’s thought experiment demonstrated a strange quirk of the universe: it allows two observers to experience two different realities from the same event. It’s based on a quantum measurement of a physical system. In our example, we’ll use the polarization of a single photon. Polarization is a property of photons that, when measured, can be described linearly as being either horizontal or vertical. However, in quantum mechanics, until that measurement is made, the photon exists in both polarization states at the same time. This is known as a superposition. The collapse of a superposition into a known state is a fundamental quantum principle illustrated famously by the double-slit experiment. In the double-slit experiment, an electron beam is projected through two slits, creating a wavelike interference pattern over time. Once a measurement device is placed in front of one of the slits, probing the electron, the interaction of the measurement device with the electron causes the wave to collapse into a defined electron path. In Wigner’s thought experiment, he envisions a friend working in a separate lab from his own. His friend is tasked with measuring the polarization state of a single photon and recording the result. To an observer outside of the lab, the friend’s measurement is a unitary interaction that leaves the photon and the friend's record in an entangled state. The recorded state and the polarization state of the photon are in effect the same information. Wigner himself can observe the experiment, though he is provided no information about his friend’s measurement or the recorded outcome. Without this information, and with no means of interacting with the photon, quantum mechanics forces him to assume that the photon’s polarization is in a superposition of all possible states. When Wigner finally asks his friend to read him the recorded measurement, from his perspective the superposition state of the photon now collapses into a single polarization state. Wigner and his friend’s record now both share the same polarization state information as that of the friend's original measurement. This creates a curious implication. During the period of time when Wigner is forced to view the photon as being in a superposition, his friend views the same photon as having a defined state. But since both points of view must be regarded as equally valid, an apparent paradox comes into play. When exactly did the collapse of the photon’s superposition occur? Was it when the friend finished his measurement, or when the information of its result entered Wigner's consciousness? This paradox potentially creates a rift in the foundation of science itself, calling into question the nature of measurement. Can objective facts even exist? Scientists carry out experiments to establish objective facts, but if they experience different realities, how can they agree on what these facts might be? Wigner’s thought experiment was put to the test by a team of physicists at Heriot-Watt University in Edinburgh, with the results being published in February 2019. The experiment tested the validity of observer-independence at the quantum level, similarly using photon polarization.
Using an extended variant of Wigner's friend scenario, the results of that experiment lend considerable strength to Wigner’s thought experiment and to interpretations of quantum theory that are observer-dependent. This view of reality questions whether observers have the freedom to make whatever observations they want. It also brings locality into question. Locality limits an object to being directly influenced only by its immediate surroundings; it prevents interactions at a distance. Several counter-arguments have been proposed against the paradox created by Wigner’s experiment. The most obvious counterpoints are those of flaws in assumptions made within the experiment itself. Some have proposed that a privileged position as an ultimate observer, encompassing the entire experiment, would be able to reconcile the paradox by viewing the experiment from a larger world view. Others have suggested that conscious observation without interaction does not count as observation, and therefore no paradox is created. Some interpretations have even speculated that the observation is personalized due to the information available to the observer, and that there is no paradox, just missing information. To date, no framework of quantum mechanics offers a full explanation for the implications of Wigner’s thought experiment. And with experimental evidence edging closer to undermining the idea of an observer-independent objective reality, the fundamental assumptions of science itself may be in danger.
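For readers who want the bookkeeping behind the single-photon polarization example, here is a toy Python sketch using the standard textbook formalism (not the Heriot-Watt experiment itself). It prepares an equal superposition of horizontal and vertical polarization and samples repeated measurements with the Born rule:

import random

# |psi> = a|H> + b|V> for a single photon's linear polarization.
# An equal superposition: a measurement yields H or V with 50% probability,
# but before the measurement neither outcome is defined.
a = 2 ** -0.5   # amplitude for |H>
b = 2 ** -0.5   # amplitude for |V>

def measure(amp_h, amp_v):
    """Collapse the superposition: sample one outcome via the Born rule."""
    p_h = abs(amp_h) ** 2
    return "H" if random.random() < p_h else "V"

results = [measure(a, b) for _ in range(10_000)]
print("P(H) ~", results.count("H") / len(results))  # close to 0.5
print("P(V) ~", results.count("V") / len(results))  # close to 0.5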

What Color Is Darkness?

s5sCkng9tLo | 13 Mar 2019

What Color Is Darkness?

What Color Is Darkness? What’s the darkest color you’ve ever seen? To most people, the obvious answer would be black, more specifically the black of total darkness. After all, black by definition is the absence of light. But total darkness isn’t as dark as you might think. Paradoxically, we need light to see the darkness. Let's try to understand how we perceive darkness with a little experiment. Pick a black object in front of you and stare at it for about a second. Now close your eyes for a few seconds and allow them to adjust. You may need to cover your eyes if you're in a bright room or outside. Now open them quickly and look at the black object. It may take you a few tries to fully see it, but you’ll notice that the black object actually appears darker than the black of total darkness. This phenomenon is caused by the way we see the world. When our eyes are open, the light-sensitive layer of cells at the back of our eyeballs, called the retina, is bombarded by packets of light energy called photons. The photons that represent visible light trigger nerve impulses on the retina that pass via the optic nerve to the brain, where a visual image is formed. But when we close our eyes or are in total darkness, most people see a vague grey field usually composed of changing regions of tiny black and white dots. This color is called Eigengrau, a German word that means “intrinsic gray”. What we’re seeing is actually visual noise, the “static” of our retinas. The photoreceptors in the human retina come in two flavors: rods and cones. Rods are responsible for vision at low light levels. This is called scotopic vision. They lack spatial accuracy or the ability to mediate color, and they exist primarily around the outer edges of the retina, forming a large part of our peripheral vision. When photons hit rod cells, a photoreceptive pigment within the cell, called rhodopsin, changes its shape. This initiates the process of triggering a nerve impulse. Rhodopsin can also change its shape spontaneously from ambient heat. This triggers a false nerve impulse. The rate of these spontaneous false triggers is temperature-dependent, though in humans it averages about once every 100 seconds per rod. With over 120 million rod cells in the human eye, the cumulative effect of these false triggers forms the visual noise we see in total darkness. Because of this intrinsic visual noise, most of us have never actually experienced true darkness. The same amount of visual noise is present whether our eyes are open or closed. But we usually don’t see it when we look at the world. So how can we see an object as darker than the visual noise if we can’t see the noise itself? When our brains process visual information, contrast is more important than absolute brightness. Darkness as we see it is relative to the brightest thing we're looking at. To see how much our brain prioritizes contrast, let's do another experiment. In a few moments, this video will show all black for a few seconds. If you’re on a smartphone or a tablet, pause the video during this and go full screen so that the entire screen is filled with black. First, look at the screen in a lit area, then look at the screen in a totally dark room. In a lit area, the brighter our surroundings, the darker the black appears on the screen. But in a room with no light, because the device’s screen is backlit, that same black now becomes the brightest object in view and appears as a glowing greyish color.
The contrast of the visual noise of our retinas is relatively low. Combined with the low spatial accuracy of rod cells, it all blurs together into the vague grey field we see when we close our eyes. With our eyes open and taking in light, the contrast of that noise is dwarfed by the contrast of the world around us. The difference in intensity between a dark object and a bright object is far greater than the intensity of the visual noise. An extreme example of this is a clear, moonless night sky. The darkness of space appearing simultaneously with the brightness of stars creates our representation of the night sky as a black canvas dotted with bright white dots. Our brains quite literally synthesize the darkness we can never truly experience.
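The figures quoted above give a sense of how busy this retinal "static" is. A back-of-the-envelope Python calculation, using only the numbers from the description (about 120 million rods and roughly one spontaneous trigger per rod every 100 seconds):

# Back-of-the-envelope estimate of retinal "dark noise",
# using only the figures quoted in the description above.

rod_count = 120_000_000                  # rods per human eye
false_triggers_per_rod_per_s = 1 / 100   # about one every 100 seconds

events_per_second = rod_count * false_triggers_per_rod_per_s
print(f"~{events_per_second:,.0f} spontaneous events per second per eye")
# ~1,200,000 events per second - enough to form the shimmering
# Eigengrau field we see with our eyes closed.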

How Engines Are Becoming More Fuel Efficient

q7vgCojcetE | 08 Mar 2019

How Engines Are Becoming More Fuel Efficient

The abrupt rise of fuel economy in the US was a direct result of a shift in fuel economy policy in 1975, in response to the oil price shock of the early 1970s. This caused a transition towards smaller cars with less powerful, smaller engines. Manufacturers took note of this and started exploring technologies that would bring power and robustness back to their vehicles while still maintaining good fuel economy. An engine extracts energy from the burning of gasoline. It does this by first taking in a mixture of fuel and air into a cylinder. The total working volume of all of the cylinders in an engine is known as its displacement. It then compresses the mixture and ignites it with a spark plug. As the mixture burns, it expands, pushing down a piston, which rotates a crankshaft. The spent gases are then pushed out through the exhaust. Power is sent from the rotating crankshaft, through the drivetrain, then to the wheels. The first step is to reduce the size of the vehicle, which allows us to also reduce the size of the powertrain. A lower-displacement engine with fewer cylinders loses less energy getting power to the wheels. This is called a parasitic loss. The amount of fuel-air mixture an engine can aspirate to create power is directly related to its displacement and number of cylinders. By reducing engine displacement, you lower the amount of power an engine can make but also the amount of fuel it consumes. The next step was to control the fuel usage of the engine more accurately. In order to do this, we need to understand when fuel is used most and why. Engines in cars have 5 modes of operation: starting, idling, accelerating, cruising, and decelerating. Acceleration and cruising are the two modes where most fuel consumption occurs. Throttling open an engine to make more power is where its highest fuel consumption occurs. Cruising, on the other hand, occurs when the throttle is held slightly open, keeping the engine speed and power output steady. This is where we can hone in on the fuel efficiency of an engine. Most of the fuel we use driving is consumed in a combination of short bursts of acceleration and longer periods of cruising. The key to balancing power and fuel economy is having strong acceleration characteristics but efficient cruising characteristics. The ideal ratio of air to gasoline is 14.7 to 1. This is known as a stoichiometric mixture. But in practice, this ratio becomes difficult to achieve. To compensate for this, more fuel is added, enriching the mixture. This allows more fuel to be burned without ideal mixing. Enriching is used primarily under acceleration to ensure maximum power generation. Unburnt fuel is wasted. With cruising, since our power requirements are constant and relatively low, mixtures closer to 14.7 to 1, or even slightly higher, are used. This is known as running lean, since we're not utilizing all of the air in combustion. Running lean uses less fuel but can be damaging. Uncontrolled self-ignition of the mixture is called detonation, and it can cause overheating and damage to an engine. Incoming fuel is used to cool the combustion chamber and control the rate of burning, reducing the chances of detonation. This limits how lean we can run an engine. Up until the 1980s, most cars relied on carburetors to meter out fuel. Because of their mechanical nature, carburetors lack precise control over the air-fuel mixture and require maintenance to keep them functioning correctly. Electronic fuel injection was embraced by manufacturers.
Fuel injection works by precisely spraying pressurized fuel through computer-controlled injectors. The computer that meters out fuel is known as an engine control unit, or ECU. Some of the key parameters measured are engine RPM, air temperature, air flow into the engine, throttle position and engine temperature. Manufacturers could now tune fuel systems much closer to the ideals for both power and fuel economy. Because the injected fuel is sprayed at higher pressures, better air-fuel mixing occurs. It requires less enrichment overall and improves both fuel economy and power. On most engines, the fuel injection system and the ignition system are merged. This allows the ECU to adjust the ignition timing relative to the combustion cycle. Creating a spark earlier in the cycle, or advancing the timing, can produce more power by starting combustion sooner. Another advantage of fuel injection is that it allows for the use of feedback in the fuel delivery system. During cruising, the leanness of combustion is monitored by an oxygen sensor in the exhaust stream, providing feedback to the ECU. The ECU can use this data to trim the air-fuel mixture closer to ideal, boosting fuel economy. Sensors to detect detonation are also present on some fuel injection systems. Early sensors work by listening for the acoustic signature of detonation on the engine block. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind
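The oxygen-sensor feedback described above amounts to a simple closed-loop correction. The Python sketch below shows the idea of trimming the mixture back toward stoichiometric; the gain, sensor model, and numbers are invented for illustration and do not represent any real ECU algorithm:

# Toy closed-loop fuel trim, illustrating O2-sensor feedback.
# Gains, sensor model, and values are assumptions for demonstration.

STOICH_AFR = 14.7   # ideal air-fuel ratio for gasoline
TRIM_GAIN = 0.05    # how aggressively the trim corrects per step

def o2_sensor_reads_lean(actual_afr):
    """A narrowband O2 sensor effectively reports rich vs lean."""
    return actual_afr > STOICH_AFR

def update_trim(commanded_afr, actual_afr, gain=TRIM_GAIN):
    """Nudge the commanded mixture toward stoichiometric."""
    if o2_sensor_reads_lean(actual_afr):
        return commanded_afr - gain   # add fuel (richer)
    return commanded_afr + gain       # remove fuel (leaner)

# Simulate cruising with a mixture that starts off too lean.
commanded = 15.5
for step in range(20):
    actual = commanded + 0.1          # assume a small constant bias
    commanded = update_trim(commanded, actual)
print(f"Commanded AFR after trimming: {commanded:.2f}")  # settles near 14.7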

Welding With Friction

D1NcfXxtKng | 03 Mar 2019

Welding With Friction

Welding With Friction When we examine the world closely, we realize there are minute forces at work. What allows the male cricket to find his mate, concert halls to be filled with orchestral sounds with every drag of the violinist’s bow, the batter to slide to home plate, or even skateboarders to manipulate their boards to perform amazing aerial tricks? Friction, lubrication, and wear are the key phenomena tribologists study and apply. Tribology, a derivative of the Greek word “tribo”, meaning “I rub”, and “ology”, meaning “study of”, was pioneered in 1493 by Leonardo Da Vinci, only to be lost in the annals of history and later rediscovered. In this segment, our focus will be on the surface imperfections of materials and the interaction of these irregularities. We will take a deep dive and explore how these microscopic defects can create thermal energy when kinetic energy is applied, and the application of this transformation in an industrial setting. Everyday examples include the chirps created when the cricket rubs the teeth-like ridges on its wings together, the rosin that gives the bow hairs friction when dragged against the violin’s strings, the batter’s uniform sliding against the infield clay, and the skateboarder gliding his shoes against the rough-textured grip tape to execute a kickflip. Let’s explore inertia friction welding, also referred to as spin welding, where material held by two chucks is joined together. As you can see, one end remains fixed and the rotary force of the other creates enough friction to bond the two ends by capitalizing on the irregularities of their surfaces. Put another way, the kinetic energy generated from the rotational motion of one end forces the surface irregularities of the mating faces to reach forging temperatures. The mating surfaces achieve enough thermal energy to bond the two ends. This technique is commonly used in the manufacturing of trailer axles, drive shafts and components for the aerospace industry. Linear friction welding, much like its spin welding counterpart, fuses two surfaces together. However, instead of the axial rotation used in spin welding to generate the kinetic energy needed to achieve forging temperatures, linear motion is applied against the static end. As a result, linear friction welding is ideal for bonding blades to discs in turbine manufacturing. The benefits of spin and linear friction welding far outweigh those of traditional fusion welding. The biggest advantage is the cleaning effect the rotary and linear motion has on the mating surfaces. This is in contrast to fusion welding, where oil, wax layers, paint or scale must be meticulously removed to ensure a reliable bond. As a result of this cleaning effect, there is no dependency on the inert gases found in traditional welding. Regarded as the automotive industry standard, ultrasonic welding displaced the former technique of yesteryear: in the past, wire harness connections were spliced and held by terminals dipped in acid and dip-solder. The new standard no longer relies on terminals, acid, and dip-solder, yet yields a stronger, more reliable bond. Ultrasonic welding, much like spin welding, leverages the irregularities in the surfaces of the bonding materials by creating friction via a horn that vibrates up to 20,000 times a second, or 20 kHz, by means of acoustic energy. It should be noted that 20 kHz is the upper limit of human hearing.
The interaction of these surface irregularities, coupled with the high-frequency vibration, generates enough thermal energy to bond the mating surfaces of the materials. The end result is a durable bond with no dependency on adhesives, fasteners or soldering materials. A prime advantage of ultrasonic welding is the ability to join molded plastic parts with leak-proof joints. For this reason, it has become the method of choice in industries that rely on the bonding of plastics, such as the medical industry. Items such as blood/gas filters, dialysis tubing, cardiotomy reservoirs, IV catheters and filters, heart sensors for bypass patients, and even textiles such as hospital gowns, face masks, and other sterile garments are manufactured utilizing ultrasonic welding. Friction stir welding has become the method of choice in the locomotive, aerospace and freight industries, specifically in the production of carriages, fuselages, and containers in which aluminum body panels are seamlessly joined. Superseding antiquated fusion methods, friction stir welding eliminates the dependency on consumables such as the welding wire, shielding gas, and rivets found in other methods. This solid-state process is regarded as environmentally friendly, as there is no off-gassing from fumes, rods or wires, and because there is no arc light, friction stir welding does not have the ocular hazards associated with traditional welding. Mechanical distortion is virtually non-existent due to low surface temperatures, which results in high-integrity joints with an excellent surface finish.

The Evolution Of Stealth Technology

5ji7H1PnuTo | 23 Feb 2019

The Evolution Of Stealth Technology

On January 17, 1991, at 2:30 AM, the opening attack of Operation Desert Storm was set in motion. Tasked with crippling Iraq’s command and control, shipborne Tomahawk and B-52-launched AGM-86 cruise missiles were employed to infiltrate targets within Baghdad. Alongside this initial inrush of deep-striking assets was a new class of weapon. This attack was the first public debut of a stealth aircraft facing off against one of the largest air defense networks in the region. The iconic Lockheed F-117 Nighthawk, the first stealth strike platform, was among the first sorties to enter the heavily defended airspace of Baghdad. In order to understand how stealth technology emerged, we need to quantify what stealth is. Stealth isn’t one specific technology, but rather a design philosophy that incorporates low observability. Of all the potential threats faced by combat aircraft, detection and tracking by radar pose the most risk. Radar, infrared, and optical detection all rely on the bouncing of electromagnetic radiation off an object in order to gather information from it. However, radar differs in that the source of the reflected radiation is actively emitted by the observer. With passive methods of detection, the target’s own emissions or ambient light are used. While detecting an aircraft provides a warning, in order to defend against it, it must be tracked. Tracking is the determination of a target's position, velocity, and heading from a reflected signal. While some defense weapons can track optically, infrared tracking is the predominant method for close-range air defense due to its ability to see the heat of an aircraft through the atmosphere. Infrared tracking works by using an infrared sensor, similar to a camera, to "look" for the highly contrasting signature of hot jet exhaust gases and the warm aircraft body against the ambient air temperature. The first military use of radar began around the start of World War 2. One of the more notable uses was in the air defense of England. By 1939, a chain of radar stations protecting the east and south coasts provided early warning of incoming aircraft. Fast forward to today, and most military airborne radars, including those found in radar-based anti-air missiles, operate in the 5 cm - 1 cm wavelength microwave range, as it provides a good compromise between range, resolution and antenna sizing. The goal of radar stealth is to mask an aircraft from being detected, tracked and fired upon from a distance. This is known as beyond-visual-range engagement. As distances close, radar-based engagements may transition into IR-based targeting, either low to the ground or among adversarial aircraft within a dogfight. Early attempts at stealth were based on observations of radar returns on existing designs. It was discovered early on that the shape of an aircraft determined its visibility to radar - its radar cross-section, or RCS. Soviet mathematician and physicist Pyotr Ufimtsev published a paper titled Method of Edge Waves in the Physical Theory of Diffraction in the journal of the Moscow Institute for Radio Engineering. Ufimtsev’s conclusion was that the strength of the radar return from an object is related to its edge configuration, not its size. Astoundingly, the Soviet administration considered his work to have no significant military or economic value, allowing it to be published internationally. During that time period, Lockheed’s elite Skunk Works design team was working on a stealth proof-of-concept demonstrator called Have Blue.
The engineering team struggled with predicting stealthiness, as the program they created to analyze radar cross-section, called ECHO-1, failed to produce accurate results. Denys Overholser, a stealth engineer on the project, had read Ufimtsev’s paper, realizing that Ufimtsev had created the mathematical theory and tools to do a finite analysis of radar reflection. Ufimtsev's work was incorporated into ECHO-1. The iconic early stealth look was a direct byproduct of the computational limits of the computers of the time, which restricted ECHO-1’s ability to perform calculations on curved surfaces. Northrop began working on a technical demonstrator of its own, known as Tacit Blue. Tacit Blue attempted to demonstrate a series of then-advanced technologies, including forms of stealth that employed curved surfaces. During the late 1970s, momentum was building for the development of a deep-penetrating stealth bomber. By 1979, the highly secretive Advanced Tactical Bomber program was started under the code name "Aurora". With the success and dominance of the US stealth programs, the technology has worked its way into other applications, such as the canceled Comanche RAH-66 reconnaissance helicopter, the Sea Shadow, and the USS Zumwalt. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind

Clever Uses Of Thermal Expansion

EiR1gl3Lix8 | 04 Feb 2019

Clever Uses Of Thermal Expansion

Clever Uses Of Thermal Expansion On any given day, we rely on dozens of hidden computers seamlessly integrated into our lives to function. The low cost, flexibility, and ease of rapid product development of embedded microprocessors have fundamentally changed how products and equipment are designed, finding their way into even the most trivial items. In this series, we explore how engineers accomplished design goals in a time long before the semiconductor revolution by spotlighting ideas that combined brilliant engineering with innovative uses of material properties. Thermal expansion is one of the more common physical phenomena we experience daily. Most materials expand when heated. When a material is heated, the kinetic energy of that material increases as its atoms and molecules move about more. At the atomic level, the material takes up more space due to this movement, so it expands. THERMOSTATS Most vehicle engines operate best around the boiling point of water. Keeping the heat generated by combustion in thermal check is a liquid cooling system that flows coolant in a circuit between the engine and a radiator. Typically the cooling system capacity is large enough to cool the engine in all modes of its operation. But when a cold engine is first started, this cooling capacity becomes a hindrance, as it can overwhelm an engine's ability to rapidly warm up to operating temperature. Thermostats are used to regulate this temperature. BIMETALLIC SWITCHES Mechanical control by thermal expansion is simple and very reliable, but what if we need to perform a non-mechanical form of temperature-based control, such as electrical switching? In a manner similar to the wax in a thermostat, metals expand when heated, though different metals expand at different rates. This difference in expansion rates allows for some interesting applications. Bimetallic strips bend when heated and can be configured into electrical thermal switches. FLASHERS We can expand on the functionality of bimetallic switches further by mounting an electrically resistive heating element on the bimetallic strip. As current flows through the heating element, the electrical resistance causes dissipation of heat, raising the temperature of the bimetallic strip. As it heats up, the thermal motion causes the bimetallic element to switch on the flow of electricity. Current is then shunted away from the heating element, cooling it. The bimetallic strip then contracts back to its original state. This opens the switch, restoring current back to the heating element. This cycle of opening and closing forms a thermal flasher. COIL THERMOSTAT Bimetallic strips are durable, easily formed and can be used in various configurations. If we coil a bimetallic strip, the thermal motion causes the coil to tighten or unwind, creating rotation. If we calibrate the motion to the temperature of the bimetallic coil, we create rotational motion relative to temperature. Add graduations and an indicator needle, and we now have a dial thermometer. This simple, purely mechanical mechanism not only allows for measuring temperature but also the ability to control it in an adjustable manner. This is how residential, non-electronic adjustable thermostats operate. THERMOCOUPLE Combining dissimilar metals for the purpose of temperature sensing also comes in other forms. When a junction between two different metals is formed, such as with the alloys chromel and alumel, the thermoelectric effect occurs.
An electrical potential difference develops across the junction, with the voltage changing in a temperature-dependent manner. This is known as a thermocouple. Thermocouples are simple, rugged, inexpensive, and interchangeable. Though they aren’t precise, they are used as temperature sensors for both simple and digital control systems. Other industrial configurations of control by heat exist, though these methods are more integrated into system-level designs that are impractical for direct electronic control. They employ the thermodynamic properties of working fluids such as air, combustion gases, steam or molten salt, and are generally used for power generation or transmission. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind
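The bending of a bimetallic strip comes down to the two metals' different expansion rates. A minimal worked example in Python, using the textbook linear-expansion relation delta_L = alpha * L * delta_T and typical handbook coefficients for brass and steel:

# Linear thermal expansion: delta_L = alpha * L * delta_T.
# Coefficients are typical handbook values for brass and steel.

ALPHA_BRASS = 19e-6   # per degree C
ALPHA_STEEL = 12e-6   # per degree C

def expansion_mm(alpha, length_mm, delta_t_c):
    return alpha * length_mm * delta_t_c

length_mm = 100.0     # a 100 mm strip
delta_t = 150.0       # heated by 150 degrees C

brass = expansion_mm(ALPHA_BRASS, length_mm, delta_t)
steel = expansion_mm(ALPHA_STEEL, length_mm, delta_t)

print(f"Brass grows {brass:.3f} mm, steel grows {steel:.3f} mm")
print(f"Difference: {(brass - steel) * 1000:.0f} microns")
# It is this mismatch, with the two strips bonded together,
# that forces a bimetallic strip to curl as it heats up.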

The Science Of Roundness

NjbvOTUSqdI | 12 Jan 2019

The Science Of Roundness

Every single one of the 3.5 trillion miles driven in the US is made possible by the hundreds of rotating parts that enable a vehicle to drive down the road. The performance of these parts is a direct result of advancements in the science of roundness. If we take the fulcrum point of a lever, move it completely over to one end, and duplicate the lever repeatedly, with each copy sharing the same fulcrum, a new simple machine is formed - the wheel. Wheels allow us to multiply distance, speed, or force based on how much leverage we put on their center point. Wheels provide another characteristic that has been critical to industrial growth: the ability to reduce friction by transmitting forces at a single point. Roundness, along with size, plays a critical role in how parts are specified, designed and fitted. However, roundness diverges from the standard methods of defining dimensions such as length, area, and volume. Roundness is more of a relationship between dimensions and must be measured in a completely different manner. The measurement of roundness, as well as other metrics of dimensionality, falls under metrology - the scientific study of measurement. The ability to verify the roundness of a part is absolutely critical to a component’s performance. In the intrinsic datum method, the datum points used for measurement are taken directly off the part and its contact points with a reference surface. Typically a flat surface is used for a single-datum measurement, or a V-block for a two-datum measurement. A measurement device that measures the displacement of the surface, such as a dial indicator, is brought to the surface of the part and zeroed at a start point. As the part is rotated, deviations from roundness displace the measurement tool from zero, with surface peaks creating positive displacements and valleys negative ones. The solution to the limitations of the intrinsic datum method is extrinsic datum measurement. Extrinsic datum measurement is done by assigning a rotational axis datum to the part and aligning it with the circular datum of a highly accurate rotating measuring fixture. The four common types of calculated reference circles are: Least Squares Reference Circle (LSC), Minimum Zone Circle (MZC), Minimum Circumscribed Circle (MCC), and Maximum Inscribed Circle (MIC). The Least Squares Reference Circle (LSC), the most commonly used reference circle, is a circle that equally divides the area between the inside and outside of the reference circle. A Minimum Zone Reference Circle (MZC) is derived by first calculating the smallest circle that can fit inside the measured data, then calculating the smallest circle that can encompass the measured data. The out-of-roundness is given by the radial separation between these two circles that enclose the data. A Minimum Circumscribed Reference Circle (MCC), sometimes known as the ring gauge reference circle, is the smallest circle that totally encloses the data. Out-of-roundness is quantified as the largest deviation from this circle. A Maximum Inscribed Reference Circle (MIC) is the largest circle that can be enclosed by the data. The out-of-roundness is quantified as the maximum deviation of the data from this circle. This is sometimes known as the Plug Gauge Reference Circle. When rotating parts are examined, especially by extrinsic measurement, the harmonics of the part become a consideration. Irregularities on a rotating part that occur rhythmically are known as undulations.
In 2011, the International Committee for Weights and Measures spearheaded an effort to redefine the kilogram, moving it away from antiquated reference objects. One proposal, pushed by an international team called the Avogadro Project, aimed to define the kilogram in terms of a specific number of silicon atoms. In order to count the atoms of the large silicon-28 crystal, it was ground into a sphere and its volume determined. Moving past man-made objects, let's look at the roundest object ever measured. In 2013, in an effort to study the distribution of charge around the electron, scientists at Harvard were able to measure the smallest deviation from roundness ever recorded.
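To make the reference circles described above concrete, here is a minimal sketch of how an out-of-roundness number might be computed from sampled profile points, assuming the extrinsic instrument already provides them as (x, y) coordinates. The Kasa least-squares fit and the synthetic three-lobe test part are illustrative assumptions, not the math used by any particular roundness instrument.

```python
# Minimal sketch: Least Squares Reference Circle (LSC) out-of-roundness.
# Assumes measured profile points are available as (x, y) arrays in mm.
import numpy as np

def least_squares_circle(x, y):
    """Kasa fit: minimise residuals of x^2 + y^2 + D*x + E*y + F = 0."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    radius = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, radius

def lsc_out_of_roundness(x, y):
    """Peak-to-valley deviation of the samples about the fitted LSC centre."""
    cx, cy, _ = least_squares_circle(x, y)
    radial = np.hypot(x - cx, y - cy)   # distance of every sample from the centre
    return radial.max() - radial.min()

# Synthetic part: 25 mm nominal radius with a 3-lobe undulation of 5 microns.
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
r = 25.0 + 0.005 * np.cos(3 * theta)
x, y = r * np.cos(theta), r * np.sin(theta)
print(f"out-of-roundness ~ {lsc_out_of_roundness(x, y) * 1000:.1f} microns")  # ~10.0
```

The MZC, MCC, and MIC values differ only in how the reference centre is chosen; they require a min-max optimisation of the centre rather than a least-squares fit, which is why metrology software treats them as separate calculations.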

The Evolution Of CPU Processing Power Part 2: Rise Of The x86

kvDBJC_akyg | 12 Dec 2018

The Evolution Of CPU Processing Power Part 2: Rise Of The x86

SERIES LINK - https://www.youtube.com/playlist?list=PLC7a8fNahjQ8IkiD5f7blIYrro9oeIfJU In this multi-part series, we explore the evolution of the microprocessor and its astonishing growth in processing power over the decades. In Part 2, we learn how the x86 architecture came to dominate the PC world through the trifecta of Intel, IBM, and Microsoft. As the 1970s progressed, CPU designs grew more robust. Faster clock speeds, larger address capacities, and more elaborate instruction sets were all being leveraged. The next major offering from Intel was the 8008. One of the more prominent additions to the 8008 feature list was the inclusion of indirect addressing. With direct addressing, a memory location is provided to an instruction, which then fetches the data contents of that address. With indirect addressing, the contents of that referenced memory location are actually a pointer to another location - where the data actually resides. The 8008 also implemented a mechanism known as interrupts. Interrupts allowed hardware signals and internal CPU events to pause program execution and jump to a small, high-priority region of code. Examples of interrupt events include a real-time clock signal, a trigger from a piece of external hardware such as a keyboard, or a change in the CPU's internal state. Even program code can trigger an interrupt. After the interrupt service code executes, the original program resumes. The next major Intel product was the 8080. The 8080 was the first in Intel’s product line to utilize an external bus controller. This support chip was responsible for interfacing with RAM and other system hardware components - communications commonly referred to as input/output, or IO. This allowed the CPU to interface with slower memory and IO that operated at system clock speeds below the CPU’s own, and it also enhanced overall electrical noise immunity. The 8080 was considered by many to be the first truly usable microprocessor; however, competing processor architectures were emerging. Over the next few years, the rise of desktop computing came to be dominated by the competing Zilog Z80 CPU, which ironically was an enhanced extension of Intel's own 8080, designed by former Intel engineer Federico Faggin. Intel’s counter to this was the release of the 8086. Keeping in line with its software-centric ethos, CPU support for higher-level programming languages was enhanced by the addition of more robust stack instructions. In software design, commonly used pieces of code are structured into blocks called subroutines - sometimes also referred to as functions, procedures, or subprograms. To illustrate this, let's say we made a program that finds the average of thousands of pairs of numbers. To do this efficiently, we write a block of code that takes in two numbers, calculates their average, and returns it. Our program then goes through the list of number pairs, calling the subroutine to perform the calculation and returning the result back to the main program sequence. The stack is used to store and transport this data, along with the return addresses for subroutine calls (a minimal sketch of this appears after this description). The notable complexity of the 8086 and its success cemented Intel’s commitment to a key characteristic of its architecture - CISC, or complex instruction set computer. Though a CISC approach was used in the 8080 and its mildly enhanced successor, the 8085, the 8086 marked Intel’s transition into full-blown adoption of CISC architecture with its robust instruction set.
With only a handful of CPUs employing it, CISC architecture is a relatively rare design choice compared to the dominant RISC, or reduced instruction set computer, architecture. Even today, x86 CPUs remain the only mainline processors that use a CISC instruction set. The difference between a RISC CPU and a CISC CPU lies in their respective instruction sets and how they are executed. RISC utilizes simple, primitive instructions, while CISC employs robust, complex instructions. Aside from adopting CISC architecture, the 8086 also combated the performance penalty of accessing memory in new ways. The 8086’s performance was further enhanced by its ability to make use of the 8087, a separate floating-point math co-processor. The success of the 8086 processors is synergistically linked to another runaway success in computing history. In the late 1970s, the new personal computer industry was dominated by the likes of Commodore, Atari, Apple, and the Tandy Corporation. With a projected annual growth of over 40% in the early 1980s, the personal computer market gained the attention of mainframe giant IBM, leading to the launch of the IBM PC. This paved the way for Microsoft’s dominance in the software industry, the IBM PC’s position as the dominant personal computer, and the x86 as the primary architecture of PCs today. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind
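To make the averaging example above concrete, here is a minimal sketch of how a stack carries subroutine arguments and results. The pair data and helper names are illustrative assumptions, and Python's own call mechanism stands in for the return-address handling that real CALL/RET instructions perform.

```python
# Minimal sketch: the "average of pairs" subroutine with an explicit stack,
# the way PUSH/POP-style stack instructions would carry operands and results.
stack = []

def average_subroutine():
    """Pops two operands off the stack and pushes their average back."""
    b = stack.pop()
    a = stack.pop()
    stack.append((a + b) / 2)

pairs = [(10, 20), (3, 5), (7, 9)]   # stand-in for "thousands of pairs"
results = []
for a, b in pairs:
    stack.append(a)                  # PUSH the arguments
    stack.append(b)
    average_subroutine()             # CALL (Python's own call stack handles
                                     # the return address for us here)
    results.append(stack.pop())      # the caller POPs the result back off
print(results)                       # [15.0, 4.0, 8.0]
```

On real hardware, the CALL instruction also pushes the return address onto this same stack, and RET pops it to resume the main program sequence.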

The Evolution Of CPU Processing Power Part 1: The Mechanics Of A CPU

sK-49uz3lGg | 29 Nov 2018

The Evolution Of CPU Processing Power Part 1: The Mechanics Of A CPU

SERIES LINK - https://www.youtube.com/playlist?list=PLC7a8fNahjQ8IkiD5f7blIYrro9oeIfJU In this multi-part series, we explore the evolution of the microprocessor and its astonishing growth in processing power over the decades. In Part 1, we learn about the first commercial CPU, the Intel 4004, and examine how it and similar early CPUs work at a fundamental level. During the mid-1960s, a revolution in miniaturization was kick-started. The idea of packing dozens of semiconductor-based transistors onto a single silicon chip spawned the integrated circuit. It laid the groundwork for a complete paradigm shift in how modern society would evolve. In March of 1971, the commercial launch of a new semiconductor product set the stage for this new era. Composed of a then-incredible 2,300 transistors, the Intel 4004 central processing unit, or CPU, was released. For comparison, ENIAC, the first electronic computer, built just 25 years earlier, could only execute 5,000 instructions a second. But what made the 4004 so powerful wasn’t just its 1800% increase in processing power - it only consumed 1 watt of electricity, was about ¾” long, and cost $5 to produce in today’s money. This was miles ahead of ENIAC’s cost of $5.5 million in today’s money, 180 kW power consumption, and 27-ton weight. In order to understand how a CPU derives its processing power, let's examine what a CPU actually does and how it interfaces with data. For all intents and purposes, we can think of a CPU as an instruction processing machine. CPUs operate by looping through three basic steps: fetch, decode, and execute (a minimal simulation of this loop appears after this description). As CPU designs evolve, these three steps become dramatically more complicated, and technologies are implemented that extend this core model of operation. FETCH In the fetch phase, the CPU loads the instruction it will be executing into itself. A CPU can be thought of as existing in an information bubble. It pulls instructions and data from outside of itself, performs operations within its own internal environment, and then returns data back. This data is typically stored in memory external to the CPU called Random Access Memory (RAM). Software instructions and data are loaded into RAM from more permanent sources such as hard drives and flash memory. But at one point in history, magnetic tape, punch cards, and even flip switches were used. BUS The mechanism by which data moves back and forth to RAM is called a bus. A bus can be thought of as a multi-lane highway between the CPU and RAM in which each bit of data has its own lane. But we also need to transmit the location of the data we’re requesting, so a second highway must be added to accommodate both the data word and the address word. These are called the data bus and the address bus, respectively. In practice, these data and address lines are physical electrical connections between the CPU and RAM and often look exactly like a superhighway on a circuit board. REGISTER The address of the memory location to fetch is stored in the CPU, in a mechanism called a register. A register is a high-speed internal memory word that is used as a “notepad” by CPU operations. It’s typically used as a temporary data store for instructions but can also be assigned to vital CPU functions, such as keeping track of the current address being accessed in RAM. Because they are designed innately into the CPU’s hardware, most CPUs have only a handful of registers. Their word size is generally coupled to the CPU’s native architecture. DECODE Once an instruction is fetched, the decode phase begins.
In classic RISC architecture, one word of memory forms a complete instruction. This changes to a more elaborate method as CPUs evolve toward complex instruction set architectures, which will be introduced in Part 2 of this series. BRANCHING Branching occurs when an instruction causes a change in the program counter’s address. This causes the next fetch to occur at a new location in memory, as opposed to the next sequential address. OPERAND Opcodes sometimes require data to operate on. This part of an instruction is called an operand. Operands are bits piggybacked onto an instruction to be used as data. Let's say we wanted to add 5 to a register. The binary representation of the number 5 would be embedded in the instruction and extracted by the decoder for the addition operation. EXECUTION In the execution phase, the now-configured CPU is triggered. This may occur in a single step or a series of steps, depending on the opcode. CLOCKS In a CPU, these three phases of operation loop continuously, working their way through the instructions of the computer program loaded in memory. Gluing this looping machine together is a clock. A clock is a repeating pulse used to synchronize a CPU’s internal mechanics and its interface with external components. The CPU clock rate is measured in pulses per second, or hertz. SUPPORT NEW MIND ON PATREON https://www.patreon.com/newmind
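As a rough companion to the fetch/decode/execute walkthrough above, here is a minimal sketch of the loop on a made-up machine. The 4-bit opcode encoding, the register names, and the toy program are assumptions for illustration only; they are not the Intel 4004's actual instruction set.

```python
# Minimal sketch of a fetch/decode/execute loop on a toy machine.
RAM = [0] * 32                     # external memory the CPU reads over the "bus"

# Each instruction is one word: high nibble = opcode, low nibble = operand.
LOAD, ADD, STORE, JUMP_IF_ZERO, HALT = 1, 2, 3, 4, 15

def encode(opcode, operand=0):
    return (opcode << 4) | operand

# Toy program: acc = RAM[14] + RAM[15], store the sum at RAM[13], then halt.
RAM[0] = encode(LOAD, 14)
RAM[1] = encode(ADD, 15)
RAM[2] = encode(STORE, 13)
RAM[3] = encode(HALT)
RAM[14], RAM[15] = 5, 7            # data the program operates on

pc = 0                             # program counter register
acc = 0                            # accumulator register
running = True
while running:                     # one pass per clock-driven loop iteration
    word = RAM[pc]                            # FETCH the instruction word
    opcode, operand = word >> 4, word & 0x0F  # DECODE opcode and operand
    pc += 1                                   # default: next sequential address
    if opcode == LOAD:                        # EXECUTE
        acc = RAM[operand]
    elif opcode == ADD:
        acc += RAM[operand]
    elif opcode == STORE:
        RAM[operand] = acc
    elif opcode == JUMP_IF_ZERO:              # BRANCH: overwrite the program
        if acc == 0:                          # counter (unused by this program,
            pc = operand                      # shown only to illustrate branching)
    elif opcode == HALT:
        running = False

print(RAM[13])                     # 12
```

The single `while` loop stands in for the clock: each iteration is one trip through fetch, decode, and execute, and branching simply rewrites the program counter before the next fetch.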