Category Science & Technology

Embark on a journey to Verona and meet the dedicated volunteers preserving the legacy of one of Shakespeare's greatest heroines.

The House of Juliet, known as "Casa di Giulietta" in Italian, is more than just an old building in Verona, Italy. It is believed to have once belonged to the Cappello family, which, according to legend, inspired the famous Capulet family in English playwright William Shakespeare's play 'Romeo and Juliet'. This is the very house where Giulietta Capuleti, the supposed inspiration behind the tragic heroine of Shakespeare's play, is said to have lived.

A global love connection

But the House of Juliet is not just a tourist attraction; it is a hub of heartwarming connections from around the world. Thousands of people leave letters addressed to Juliet, expressing their deepest feelings about love, relationships, and life, when they visit this medieval 13th-century palace. Some letters are placed in a postbox at the house itself, while others are simply addressed to 'Juliet, Italy' and sent from all corners of the globe.

Juliet Club

The Juliet Club, a hidden gem tucked away in the backstreets of Verona since 1972, is a place where the timeless spirit of romance thrives. The club's heart and soul lie in a dedicated committee of a dozen local women who affectionately call themselves 'The Secretaries of Juliet'. These volunteers take on the heartwarming task of responding to the staggering 50,000 letters Juliet receives annually. They diligently strive to answer every letter, even those written in languages beyond Italian or English, seeking local speakers to help bridge the communication gap. Stepping into a workroom filled with boxes of handwritten letters, the secretaries embark on their mission to provide solace, wisdom, and advice on matters of the heart.

This unique experience is not limited to the dedicated team: anyone can be a part of Juliet's legacy. A visitor can drop in for a day and become Juliet's secretary, reading and responding to letters that resonate with their heart. Each response is penned on official Club di Giulietta stationery and signed off as 'Secretary of Juliet'.

The first secretary of Juliet

The tradition of answering Juliet's letters dates back to the 1930s, when the guardian of Juliet's grave in Verona, Ettore Solimani, first began replying to letters left for the literary character. Handwritten letters have retained their allure despite the prevalence of modern communication methods. The club's archive stands as a treasure trove of countless love stories and a testament to the enduring power of love expressed through pen and paper.

The Juliet Club and its Secretaries carry forward the legacy of Shakespeare's iconic character, extending love, hope, and empathy to countless hearts seeking solace and connection.

 

Did the Milky Way shape-shift?

For the longest time, astronomers have been trying to unlock the mysteries surrounding our Milky Way galaxy. Astronomers have known that our galaxy looks like a spiral since the 1950s. Galaxies are classified based on their shapes and physical features, the activity in their central regions, and so on. The presence of spiral arms in our galaxy has placed it in the category of spiral galaxies.

What are spiral galaxies?

Galaxies are generally categorised as spiral (like our Milky Way), elliptical, and irregular. Spiral galaxies have winding spiral arms that make them look like pinwheels, and these cosmic entities comprise stars, gas, and dust.

Their spiral arms are composed of gas and dust from which bright younger stars are born. Stars are actively being formed in spiral galaxies: the younger stars form in the gas-rich arms, while the older stars occur in the halo, in the disk, and within the bulge. This is happening in our neighbouring galaxies as well. Spiral galaxies are further grouped into normal spirals and barred spirals. A barred spiral galaxy has ribbons of stars, gas, and dust running across its centre. Our galaxy, as well as the Andromeda galaxy, belongs to the barred spiral subtype.

But here is a new spin on the story. New observations have shown that our galaxy was not always a spiral. Reporting in the scientific journal Monthly Notices of the Royal Astronomical Society, astronomer Alister Graham observed that galaxies evolve from one shape to another. He used old and new observations to show how this evolution takes place, a process called galactic speciation. Clashes and subsequent mergers with other galaxies drive this process of cosmic evolution.

So our galaxy transformed from a dust-poor lenticular galaxy into the spiral galaxy we know today. Some 4 billion to 6 billion years in the future, our galaxy is poised to merge with its neighbour, the Andromeda galaxy. Following this collision and merger, the resulting daughter galaxy will be a dust-rich lenticular galaxy, with an intact disk but without the spiral structure.

Picture Credit: Google

Why was Apple forced to switch to USB-C?

Shreyas Sen

Apple recently announced that it plans to adopt the USB-C connector for all four new iPhone 15 models, helping USB-C become the connector of choice of the electronics industry, nine years after its debut. The move puts Apple in compliance with European Union law requiring a single connector type for consumer devices.

USB-C is a small, versatile connector for mobile and portable devices like laptops, tablets and smartphones. It transfers data at high speeds, transmits video signals and delivers power to charge devices' batteries. USB stands for Universal Serial Bus. The C refers to the third type, following types A and B.

The USB Implementers Forum, a consortium of over 1,000 companies that promote and support USB technology, developed the USB-C connector to replace the older USB connectors as well as other types of ports like HDMI, DisplayPort and VGA. The aim is to create a single, universal connector for a wide range of devices.

The key features and benefits of USB-C include a reversible connector that you can insert in either orientation. It also allows some cables to have the same connector on both ends for connecting between devices and connecting devices to chargers, unlike most earlier USB and Lightning cables.

USB-C’s widespread adoption in the electronics industry is likely to lead to a universal standard that reduces the need for multiple types of cables and adapters. Also, its slim and compact shape allows manufacturers to make thinner and lighter devices. USB-C refers to the physical connector. Connectors use a variety of data transfer protocols – sets of rules for formatting and handling data – such as the USB and Thunderbolt protocols.

The latest USB protocol, version 4, provides a data transfer rate of up to 40 gigabits per second, depending on the rating of the cable. The latest Thunderbolt, also on version 4, supports up to 40 gigabits-per-second data transfer and 100 watts charging. The newly announced Thunderbolt 5 will support up to 80 and 120 gigabits-per-second transfer and 140 to 240 watts power transfer over a USB-C connector.
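To get a feel for what these rates mean in practice, here is a minimal back-of-the-envelope sketch in Python (the function name and file size are illustrative; real transfers also carry protocol and encoding overhead, so actual times are somewhat longer):

```python
# Ideal transfer time for a file over a link, ignoring protocol overhead.
# Link rates are quoted in gigabits per second; file sizes in gigabytes
# (1 byte = 8 bits), so the units must be converted before dividing.
def transfer_time_seconds(file_size_gb: float, rate_gbps: float) -> float:
    """Return the overhead-free transfer time in seconds."""
    bits = file_size_gb * 8e9          # gigabytes -> bits
    return bits / (rate_gbps * 1e9)    # bits / (bits per second)

# A 10 GB video file over USB4 or Thunderbolt 4 at 40 Gbit/s:
print(transfer_time_seconds(10, 40))   # -> 2.0 (seconds)
# The same file over Thunderbolt 5 at 80 Gbit/s would take half that:
print(transfer_time_seconds(10, 80))   # -> 1.0 (seconds)
```

In other words, doubling the link rate from 40 to 80 gigabits per second halves the ideal transfer time, which is why the jump to Thunderbolt 5 matters for large files.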

Since its introduction in 2014, USB-C has gained widespread popularity and has already become the connector of choice for most non-Apple devices. Apple converted the iPad Pro to USB-C in 2018 and is now doing the same for the best-selling Apple device, the iPhone.

Thanks to the industrywide adoption of USB-C, consumers soon won’t have to ask “Is this the right connector?” when they reach for a cable to charge or sync their portable devices. (This article is republished from The Conversation under a Creative Commons licence.)

Picture Credit: Google

What do driverless cars use to determine the best route or course of action when travelling from one location to the next?

From finding the fastest path to a cafe to self-driving cars, modern necessities and conveniences rely upon something that many take for granted: the Global Positioning System (GPS). GPS is so deeply ingrained in our daily lives that it's difficult to picture a world without it, but do you know where it came from?

The origin of GPS

In the mid-1960s, the U.S. Navy experimented with satellite navigation to track U.S. submarines carrying nuclear weapons. In the early 1970s, building on earlier ideas from navy scientists, the Department of Defence (DOD) decided to employ satellites to support its planned navigation system, to ensure it was reliable and stable. In 1978, the DOD launched the first Navigation System with Timing and Ranging (NAVSTAR) satellite, which later evolved into GPS. In 1993, the 24-satellite constellation became fully operational. It was initially intended to replace earlier navigation systems and locate military transport equipment worldwide with accuracy. Over time, GPS evolved into an easily available, free tool that improves daily safety and convenience.

The Pioneers behind GPS

Despite being created by the U.S. Department of Defence, a few scientists have been recognized as having made significant contributions to this ground-breaking technology. Roger L. Easton led the Space Applications division of the Naval Research Laboratory; timing technology and circular orbits are two of the most important aspects of GPS that he specialised in as a Cold War scientist. As the first manager of the Navstar GPS programme, Brad Parkinson contributed to the program's conception and early-to-mid-stage implementation. Dr. Ivan Getting was the founding president of The Aerospace Corporation and drove the GPS's launch. Dr. Gladys West worked at the U.S. Naval Weapons Laboratory, where she calculated equations and analyzed satellite data used to pinpoint precise locations.

How Does GPS Work?

Satellites, ground stations, and receivers make up the three components of the GPS. The satellites transmit radio signals carrying precise time and location data derived from onboard atomic clocks. These signals travel through space at the speed of light, about 300,000 kilometres per second. Ground stations verify the precise location of the satellites by receiving their signals. A computer, an atomic clock, and a radio are installed on every satellite, and each continuously broadcasts its changing position and time as it orbits the Earth. A receiver on the ground needs signals from at least four satellites to work out its own position. Scientifically, GPS is offering data that was historically beyond reach, in exceptional amounts and with extraordinary clarity: scientists are using it to measure the movement of the polar ice sheets, the Earth's tectonic plates, and volcanic activity.

Ever wondered how birds find their way?

If you were lost in the middle of the woods and couldn't see the sun, you might use a compass to figure out which way to go. For more than a thousand years, people have used magnetic compasses to navigate. But how do birds find their way?

The Earth's magnetic field is known for shielding the planet and its people from dangerous cosmic rays and plasma emitted by the sun. However, birds use this magnetic field for navigation in a unique manner, similar to a GPS, and they can switch this sense on and off with great flexibility. Researchers have discovered two factors that are essential to a bird's internal GPS: eyesight and scent. The scent is the unusual one, because we don't typically associate birds with a sense of smell; it turns out, though, that smell plays an important role in helping birds navigate. A bird can also identify magnetic fields visually, allowing it to use a visual compass to navigate over long distances. Scientists have discovered proteins called cryptochromes in birds' retinas that enable this sensing and signalling, assisting birds in navigating the great distances they travel while migrating.

Researchers have also detected a small area of magnetite on the beaks of several birds. Magnetite is a magnetised mineral that functions as a miniature GPS device for birds, providing information about their position relative to the Earth's poles. Birds are thought to be able to navigate vast distances across places with few landmarks, such as the ocean, by using both beak magnetite and eye sensors.

Picture Credit: Google

Why is Google's 'default' status in trouble?

Riding the tide of in-built advantage

When we buy a new smartphone, it usually comes loaded with Google apps, including Chrome, YouTube, and Gmail, among others. It turns out Alphabet, the company that owns Google, pays phone makers millions of dollars to make Google the default search engine on their gadgets. Google's competitors are upset about this arrangement. It is no wonder, as most of us do not care to go into the settings and swap Google out for Bing or DuckDuckGo.

The issue sparked concerns of unfair trade practices in the U.S., and the Justice Department there filed a case in December 2020. The case was joined by the attorneys general of eleven states, as they felt Google was acting like a monopoly. It has since turned into the largest antitrust trial the U.S. has witnessed in the past 25 years. District Judge Amit Mehta's decisions can impact the way all of us use the most popular search engine in the world, even in India.

Is the search engine business a monopoly?

The U.S. court is investigating whether Google is running an illegal monopoly in the search engine business. Google's search engine has earned a huge market share ever since it started presenting people with helpful information culled from billions of indexed websites, a technology developed by then Stanford University graduate students Larry Page and Sergey Brin during the late 1990s.

Today, Alphabet's market value is around a whopping $1.37 trillion. However, there are several other companies in the fray of the search engine business, though most of them may be unheard of and unknown to large sections of internet users. According to statistics, a massive 91.85 percent of all internet users use Google. Next comes Microsoft's Bing, with just 3.01 percent of people using it. The remaining roughly 5 percent of users rely on one of the many other engines, such as Yahoo, Yandex, Baidu, and DuckDuckGo.

If the U.S. court rules against Google, it could open the market up for new online avenues for consumers and businesses to explore in pursuit of information, entertainment and commerce. This may end up improving the quality of online services for consumers.

Picture Credit : Google

What is the origin of the barcode?

 

It has been 50 years since the barcode, a series of parallel bars or lines of varying width printed on various products, was invented. Over the years, the barcode has transformed the way the retail industry functions globally. It is now used to speed up supermarket checkout lines, parcel deliveries, airline check-in, and more.

Origin

The barcode was invented by Drexel University students Norman Joseph Woodland and Bernard Silver in 1948 and patented in 1952. However, the first barcode was drawn in sand in Miami Beach, U.S. by Woodland, decades before technology could bring his vision to life.

The invention was prompted when a local food chain store owner in Philadelphia requested the dean of the then Drexel Institute of Technology (now Drexel University) to come up with a way to get shoppers through billing faster. Though the dean shrugged it off, Bernard Silver and Woodland teamed up to develop a solution.

The first barcode was called the Bull's Eye barcode, a series of concentric circles. It was based on Morse code, the well-known character-encoding scheme in telecommunications defined by dots and dashes, with the dots and dashes extended into narrow and wide bands. However, the idea could not be developed into a working system at the time because laser and computing technology was too expensive.

Later, US engineer George Laurer implemented Woodland’s idea using less expensive laser and computing technology. He developed a rectangular scanner with strips called the Universal Product Code.

On April 3, 1973, big retailers and food companies agreed to use the barcode to identify products. On June 26, 1974, barcode technology was used for the first time, in the U.S. state of Ohio, to scan a pack of chewing gum. The gum is now in the National Museum of American History in Washington.

The original barcode carried an 11-digit code: six digits identifying the manufacturer and five identifying the product. A 12th digit was added later as a check.

How do they work?

The bars are black strips on a white background. Their width and number, however, differ on each product. The bars represent the binary digits 0 and 1, sequences of which encode the numbers 0 to 9 and can be processed by a digital computer. Barcodes also display the 12-digit number in print, typically underneath the bars, as a backup in case of scanning problems.

Barcode scanners use an incandescent light bulb or a laser to shine light onto the barcode. While the black lines absorb the light, the white parts reflect it. As a barcode is scanned, the amount of reflected light is detected and translated into a set of digits. Information about the product can then be retrieved from a computer database using this data.
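The 12th 'check' digit mentioned earlier can be recomputed by the scanner with simple arithmetic to catch misreads. Here is a minimal Python sketch of the standard UPC-A checksum (the function name is ours, and the sample number is just a common illustrative example):

```python
def upc_check_digit(first11: str) -> int:
    """Compute the 12th (check) digit of a UPC-A barcode from its first 11 digits."""
    digits = [int(d) for d in first11]
    odd_sum = sum(digits[0::2])    # 1st, 3rd, ... 11th digits
    even_sum = sum(digits[1::2])   # 2nd, 4th, ... 10th digits
    # Odd-position digits are weighted by 3; the check digit brings the
    # total to the next multiple of 10.
    return (10 - (odd_sum * 3 + even_sum) % 10) % 10

print(upc_check_digit("03600029145"))  # -> 2, so the full code is 036000291452
```

If a smudge or misread changes any single digit, the recomputed check digit no longer matches the printed one and the scan is rejected.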

Problems

While barcodes have indeed revolutionised the way products are registered and sold, there are several problems as well. With barcodes, there is a high probability of misreading due to misorientation, or obstruction by dirt, mist, protrusions, and damage. Besides, barcodes can be scanned only from a short distance of about one metre. Also, barcode scanners are delicate and expensive.


Picture Credit: Google

 

What is a 3D-printed robotic hand?

Researchers have succeeded in printing robotic hands with bones, ligaments, and tendons for the first time. A new laser-scanning technique makes it possible to 3D print with a wider range of polymers.

Additive manufacturing, or 3D printing, is the construction of a 3D object from a 3D digital model. The technology behind this has been advancing at a great pace, and the number of materials that can be used has also expanded considerably. Until now, however, 3D printing was limited to fast-curing plastics. The use of slow-curing plastics has now been made possible thanks to a technology developed by researchers at ETH Zurich and Inkbit, a U.S. start-up spun off from MIT. This has resulted in successfully 3D printing robotic hands with bones, ligaments, and tendons. The researchers from Switzerland and the U.S. have jointly published the technology and its applications in the journal Nature.

Return to original state

In addition to their elastic properties, which enable the creation of delicate structures and parts with cavities as required, the slow-curing thiolene polymers also return to their original state much faster after bending, making them ideal for the likes of ligaments in robotic hands.

The stiffness of thiolenes can also be fine-tuned as per our requirements to create soft robots. These soft robots will not only be better-suited to work with humans, but will also be more adept at handling delicate and fragile goods.

Scanning, not scraping

In 3D printers, objects are typically produced layer by layer. This means that a nozzle deposits a given material in viscous form and a UV lamp then cures each layer immediately. This method requires a device that scrapes off surface irregularities after each curing step.

While this works for fast-curing plastics, it would fail with slow-curing polymers like thiolenes and epoxies as they would merely gum up the scraper. The researchers involved therefore developed a 3D printing technology that took into account the unevenness when printing the next layer, rather than smoothing out uneven layers. They achieved this using a 3D laser scanner that checked each printed layer for irregularities immediately.

This advancement in 3D printing technology provides much-needed advantages, as the resulting objects not only have better elastic properties but are also more robust and durable. Combining soft, elastic, and rigid materials also becomes much simpler with this technology.

Picture Credit : Google 

The future of computing?

A computer that is powered by human brain cells, extending the capabilities of modern computing exponentially and creating novel fields of study: no, this isn't the one-line plot of a science-fiction story. Researchers from Johns Hopkins University expect such 'biocomputers' to be developed within our lifetimes.

Organoid intelligence

While computing and artificial intelligence have been driving the tech revolution, they are approaching their limits. Biocomputing aims to compact computational power and increase efficiency in order to push past these limitations. A team of researchers outlined their plan for 'organoid intelligence' in the journal Frontiers in Science in February 2023.

For nearly 20 years, scientists have used tiny organoids, lab-grown tissue resembling fully grown organs, to experiment on organs without resorting to human or animal testing. Recently, researchers have started working on brain organoids.

Our brain remains unmatched by modern computers. While recent supercomputers have exceeded the computational capacity of a single human brain for the first time, they have done so using a million times more energy.

Light on energy demands

A futuristic computer with biological hardware, or brain organoids, might be able to provide superior computing with limited energy consumption. Even though it may take decades to achieve an operational organoid intelligence that can power a system as smart as a mouse, researchers believe that setting out along that path now is important. This, they believe, will create funding programmes that will help scale up the production of brain organoids and have them trained using artificial intelligence. Apart from its computational capabilities, organoid intelligence might also be a game-changer in drug testing, especially research pertaining to neurodevelopmental disorders and neurodegeneration.

The team working on organoid intelligence comprises scientists, bioethicists, and members of the public. This diverse consortium is an attempt to have varying opinions while assessing the ethical implications of working with organoid intelligence.

Picture Credit : Google

What are some examples of things written about in science fiction that became real?

Battle tanks, debit/credit cards, headphones, bionic parts… many of the machines and gadgets we use today were predicted by sci-fi authors long ago. Let's look at a few of them that have become a reality.

Debit/Credit Cards

Edward Bellamy’s 1888 novel ‘Looking Backward’ was a huge success in its day, but it is best remembered for introducing the concept of ‘universal credit’. Citizens of his future utopia carry a card that allows them to spend ‘credit’ from a central bank on goods and services without paper money changing hands.

Battle tanks

One of the best-known science fiction writers of the 20th century was H.G. Wells. In his 1903 story 'The Land Ironclads', published in the 'Strand' magazine, Wells described war machines that were uncannily similar to the modern tank. They were approximately 100 feet long and rolled on eight pairs of wheels, each of which had its own independent turning axle. A conning tower in the top let the captain survey the scene. The first battle tanks were deployed on the battlefield a mere 13 years later, during the Battle of the Somme in World War I, and have been an integral part of every country's armed forces ever since.

In ‘When the Sleeper Wakes’ (1899), Wells describes automatic motion-sensing doors which saw reality 60 years later.

Earbud headphones

When Ray Bradbury published his classic 'Fahrenheit 451' in 1953, portable audio players were a reality, but headphones were massive and ugly-looking. That's why his description of 'seashells' and 'thimble radios' that brought 'an electronic ocean of sound, of music and talk' is so amazing. He almost exactly describes the earbud headphone and Bluetooth, which didn't come into popular use till 2000!

Video chat

The first demonstration of video conferencing came at the 1964 New York World’s Fair, where AT&T wowed crowds with its ‘picturephone’. The technology has come a long way since then, but the first description of video phones came in Hugo Gernsback’s serial tale Ralph 124c 41+ in Modern Electrics magazine in 1911. In it, he described a device called the ‘telephot’ that let people see each other while speaking long distance.

Internet glasses

The protagonist of Charles Stross' 2005 book 'Accelerando' carries his data and his memories in a pair of glasses connected to the Internet. In 2013, Google came out with a wearable computer called Google Glass fitted to spectacle frames. Wearers could access the Internet using voice commands.

All in one novel

Stand on Zanzibar, a 1968 dystopian* novel by John Brunner which won a number of sci-fi book awards, makes several technological and political predictions. These include laser printers, satellite TV, electric cars and on-demand video broadcasts.

Bionic man

Martin Caidin’s 1972 book ‘Cyborg’ is the story of astronaut-turned-test pilot Steve Austin who is severely injured in a plane crash. The government engages a doctor who is researching bionics or the replacement of human body parts with mechanical prosthetics that work almost as well as the original. Cochlear implants for the deaf and artificial hearts are successful modern applications of bionics.

*dystopian: a pessimistic description of a society that breaks down. Its opposite is 'utopian'.

Picture Credit : Google

How do hearing aids work?

A hearing aid, which consists of a microphone, amplifier, and speaker, makes sound louder for the user.

A hearing aid is a small electronic or digital medical device designed to help people who are hard of hearing. It makes sound louder for the user.

A hearing aid basically consists of three parts: a microphone, an amplifier, and a speaker. The microphone collects sounds from the user's environment and converts the sound waves into electrical (or digital) signals. The amplifier magnifies the power of the signals and then sends them to the inner ear through the speaker.
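The amplifier stage described above can be illustrated with a toy Python sketch. This assumes digital audio samples normalised to the range -1 to 1; the function and numbers are purely illustrative, and real digital hearing aids apply different gains per frequency band rather than one flat boost:

```python
# Toy amplifier: boost audio samples by a gain given in decibels,
# clipping to the valid range so very loud sounds don't wrap or distort.
def amplify(samples, gain_db):
    factor = 10 ** (gain_db / 20)  # decibels -> linear amplitude factor
    return [max(-1.0, min(1.0, s * factor)) for s in samples]

quiet = [0.01, -0.02, 0.03]
louder = amplify(quiet, 20)  # a 20 dB gain multiplies the amplitude by 10
```

The clipping step mirrors what a real device must do: a sound that is already loud cannot simply be multiplied up without limit, or it would be painfully distorted for the user.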

Those with a hearing disability have damaged hair cells in the inner ear. The surviving hair cells detect the sound vibrations magnified by the hearing aid and transmit them to the brain. However, if the hair cells are too damaged, then a hearing aid may be ineffective.

Hearing aids are available in various styles. The most common ones, known as behind-the-ear (BTE) aids, consist of a plastic case worn behind the ear, which contains the electronic parts. The case is connected by a narrow tube to an earmold inserted inside the ear. Smaller hearing aids, such as in-the-ear (ITE), in-the-canal (ITC), and completely-in-canal (CIC) aids, take the form of earmolds that fit snugly inside the ear and are almost invisible to others.

Picture Credit : Google 

A robot chef that learns from videos

You might not often think about it that way, but cooking is a difficult skill with a number of factors in play. Just ask a robot! While human beings can learn to cook through observation, the same cannot be done easily by a robot. Programming a robot that can make a variety of dishes is not only costly, but also time-consuming.

A group of researchers from the University of Cambridge have programmed their robotic chef with a cookbook – eight simple salad recipes. The robot was not only able to identify which recipe was being prepared after watching a video of a human demonstrating it, but was also then able to make it. The results were reported in the journal ‘IEEE Access.’

Simple salads

For this study, the researchers started off by devising eight simple salad recipes and then made videos of themselves making these. A publicly available neural network programmed to identify a range of different objects was then used to train the robot chef.

The robot watched 16 videos and was able to recognise the correct recipe 93% of the time (15 times out of 16), even though it detected only 83% of the actions of the human chef in the video. The robot was able to recognise that slight variations (portions or human error) were just that, and not a new recipe. It even recognised the demonstration of a new, ninth salad, added it to its cookbook and made it.

Hold it up for them

The researchers were amazed at the amount of nuance the robot could grasp. For the robot to identify an ingredient, the demonstrators had to hold up the fruit or vegetable so that the robot could see it whole, before it was chopped.

These videos, however, were nowhere like the food videos with fast cuts and visual effects that trend on social media. While these are too hard for a robot to follow at the moment, researchers believe that robot chefs will get better and faster at identifying ingredients in videos like those with time, thereby becoming capable of learning a range of recipes quickly.

Picture Credit : Google 

Sci-fi novels that shaped reality

Science fiction (sci-fi) and scientific innovation have been intertwined since the creation of this genre. Here are five marvellous inventions that were inspired by sci-fi.

The Taser

The Taser stun gun is a hand-held, non-lethal electrical weapon used by police and law enforcement officers around the world. Invented by Jack Cover, an American aerospace scientist, in the 1960s-70s, the device takes its inspiration from American writer Victor Appleton's young adult sci-fi novel 'Tom Swift and His Electric Rifle' (1911). The device was created to provide an alternative to the firearms that air marshals were supposed to carry and use in case of a hijack. It was a solution to the concern that firing a gun on a plane could damage important and sophisticated machinery or pierce the fuselage.

Cover’s invention pays homage to Appleton’s book, and its name TASER is an acronym for Thomas A. Swift’s Electric Rifle. The scientist is said to have added the ‘A’ to make the word easier to pronounce.

Liquid-fuelled rockets

The idea of rockets, space travel, and exploration might not sound exceptionally futuristic today, but for 16-year-old Robert H. Goddard, coming across this idea for the first time in English novelist H.G. Wells's 'The War of the Worlds' (1898) was something right out of a dream. The famed father of rocketry invented and launched the world's first liquid-fuelled rocket in 1926, bringing space travel within reach. A few years after this momentous event, the American physicist penned a letter to Wells describing the "deep impression" the novel had made on him, motivating him to take this journey "aiming at the stars" both literally and figuratively.

World Wide Web

Millions of people across the globe use the World Wide Web every day. They access it through computers, phones and other digital devices. From ordering food to sharing one’s location or some news and pictures with others, we use the Web all the time.

The first proposal for the World Wide Web was written by Tim Berners-Lee in 1989. Talking about the motivation behind this invention, the English computer scientist said, "I believe if you connect people up and you take away the national boundaries and you just leave humanity connected, it will naturally become better." He also credited Arthur C. Clarke's short story 'Dial F for Frankenstein' as the inspiration behind the World Wide Web. Acknowledging the impact of his story on Berners-Lee, the English sci-fi writer declared, "I guess I am the godfather of the World Wide Web."

Humanoid robots ASTRO BOY

Japan's Tomotaka Takahashi is one of the world's leading new-generation robot scientists. In 2013, his humanoid robot Kirobo became the world's first talking robot sent into space to keep astronauts company. Talking about his passion for robotics in an interview, Takahashi said, "When I was about six, I started reading the manga comic 'Astro Boy' after finding it lying around at home. My dream, from that moment on, was to become a robot scientist. I made my first robot around the same time, from a soapbox and duct tape, complete with a robot face." Osamu Tezuka's 'Astro Boy' is a manga series that ran from 1952 to 1968 and chronicles the adventures of the titular humanoid. The Japanese scientist also admitted that his 13-inch-tall robot Kirobo's design and colour palette are heavily inspired by the friendly manga character.

Helicopter

Since the beginning of time, the idea of flying from one place to another in little time has fascinated humans. The helicopter is one of the many inventions that aimed to accomplish this. Russian-American aviator Igor Sikorsky is credited with inventing the modern helicopter. As a child, his parents exposed him to the technical drawings of da Vinci and encouraged him to pursue science. As a curious kid growing up in Russia, he was fascinated by Jules Verne's 1886 novel 'Robur the Conqueror' (also known as 'The Clipper of the Clouds'). The book's description of a flying machine called the 'Albatross' inspired Sikorsky's design of the helicopter. Starting test flights in 1939, Sikorsky's aircraft was ready for larger production by 1942.


Science and tech to nature’s rescue

Of course, human technology can never completely replace nature. But, along with science, technology can help our world in several ways.

It is easy to presume our planet will recover with gentle human care alone. But, in reality, it would require a lot of support from various other quarters as well. For instance, science and technology. These two areas play a huge role in keeping our natural world going, now more than ever, as we grapple with climate change.

We require the science of data gathering simply to understand where we stand today – be it assessing the number of wildlife lost to wildfires in a region, the amount of glacial ice a mountain is losing every year, or when a dormant volcano might next erupt. As for technology, everything from something as simple as a camera trap to advanced mechanisms such as Geographic Information Systems (GIS) can help us track wildlife, which is crucial for conservation measures.

Data gathering and tracking wildlife are among the many ways in which science and technology help. If technically advanced systems can alert the officials concerned about poaching or illegal tree-felling in real time, they can go a long way in preventing grave losses. And tools such as social media are powerful enough to bring about positive change through information sharing and collective demand for action.

As we start to run out of time to save our planet, it is imperative that we dip into every possible resource available to us, and keep working on improving such resources for the future too.


What is a lie-detector?

A lie-detector test does not conclusively prove that the person is being untruthful and as such the results of this test are not treated as evidence in Indian courts.

It is a device often used during criminal investigations for questioning suspects. But how does it work?

A lie-detector or polygraph is a device that monitors a person’s involuntary physiological reactions when he or she is questioned about a certain event. The instrument tries to find out if the person is trying to conceal something. It is often used during criminal investigations for questioning suspects. A lie-detector is essentially a combination of a variety of medical devices that monitor changes occurring in the body during questioning. The examiner looks for important reflex actions of the body when the person is subjected to stress, by monitoring fluctuations in heart rate, blood pressure, respiratory rate, etc.
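
As a toy illustration of the examiner's basic idea (real polygraph interpretation is done by a trained examiner, not by a formula), a reading taken during questioning can be compared against the subject's resting baseline and flagged when it deviates sharply. The numbers below are invented:

```python
import statistics

# Flag a physiological reading (e.g. heart rate) that jumps well above the
# subject's resting baseline - a crude stand-in for the examiner's judgement.
def flag_stress(baseline, reading, threshold=2.0):
    """Flag a reading more than `threshold` standard deviations above baseline."""
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return (reading - mean) / sd > threshold

baseline_heart_rate = [72, 74, 71, 73, 72, 75]   # beats per minute at rest
print(flag_stress(baseline_heart_rate, 95))      # True  - a sharp spike
print(flag_stress(baseline_heart_rate, 74))      # False - within normal range
```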

Based on these indications, the examiner can at best interpret if the person is being deceptive. But a lie-detector test does not conclusively prove that the person is being untruthful and as such the results of this test are not admissible in Indian courts.


How does Google Maps work?

You might have used or seen your parents use Google Maps while travelling around the city. Have you ever wondered how it works? Find out…

Google Maps has revolutionised travel like never before. Travellers can chalk out their itineraries and find addresses at the click of a button with the help of this free map service. You can virtually view the street your cousin lives on in the U.S. without moving an inch from your seat!

Google bases its maps on information taken from a selection of databases. The most crucial data is provided by satellite images of cities, which are captured and converted into small image files. The data is then verified against a vast database of map references such as longitude and latitude co-ordinates, addresses and postal codes.

When you type an address in the search field, Google sends the query to its global servers and searches for the closest location match. The search results in the corresponding map of the location being displayed on screen. When you ask for directions from Location A to B, Google sorts out the information in its map servers, which store millions of potential route combinations, to find the fastest route between the two locations. This kind of system, which deals with information related to location, is called a Geographic Information System (GIS).
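
Google's actual routing system is proprietary and far more sophisticated, but the core idea of finding the fastest route on a network of roads can be sketched with Dijkstra's classic shortest-path algorithm. The road graph and travel times below are invented for illustration:

```python
import heapq

def fastest_route(graph, start, goal):
    """Return (total_minutes, route) for the quickest path from start to goal."""
    queue = [(0, start, [start])]          # (minutes so far, node, route taken)
    seen = set()
    while queue:
        minutes, node, route = heapq.heappop(queue)
        if node == goal:
            return minutes, route
        if node in seen:
            continue
        seen.add(node)
        for neighbour, cost in graph.get(node, {}).items():
            if neighbour not in seen:
                heapq.heappush(queue, (minutes + cost, neighbour, route + [neighbour]))
    return None

# Hypothetical travel times in minutes between locations A-D.
roads = {
    "A": {"B": 10, "C": 3},
    "B": {"D": 4},
    "C": {"B": 2, "D": 12},
    "D": {},
}
print(fastest_route(roads, "A", "D"))  # (9, ['A', 'C', 'B', 'D'])
```

Note that the direct-looking hop A→B (10 minutes) loses to the detour through C (3 + 2 + 4 = 9 minutes), which is exactly the kind of comparison a routing server makes at scale.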

Google Maps first started as a software application developed by the Danish-born Rasmussen brothers Lars and Jens for a company that was later acquired by Google.


What is Metaverse?

The latest buzzword in internet circles is ‘Metaverse’! It is making headlines, especially with Facebook even rebranding itself as Meta! It is expected to create a major impact in the digital world.

What is Metaverse?

Put simply, the metaverse is a 3D (three-dimensional) version of the internet. It can be considered a place parallel to the physical world, where you spend your digital life. In the metaverse, you and others in it will have an avatar. You will interact with each other through avatars. It is a shared virtual space, which is interactive and has an immersive experience.

Let’s look at some examples. You may have used the metaverse in some form or the other while playing video games. A basic form of the metaverse has been adopted in the online shooter game Fortnite, where gamers have their own personal avatars to engage with the avatars of other players.

In the simulation video game Second Life, users experience a virtual world in which their avatars can do everything they can in real life, including eating, sleeping, shopping, etc.

The term 'metaverse' first cropped up in the science fiction novel "Snow Crash" by Neal Stephenson in 1992. In the book, the author referred to the metaverse as an all-encompassing digital world that exists parallel to the real world.

Tools needed

You will need a VR (Virtual Reality) headset, a controller and a powerful laptop to enter the metaverse. You will also need digital currency to live in the metaverse.

Future impact

The metaverse will make gaming more realistic and increase the user’s immersive experience. Travelling around the world without leaving your room will become possible. Healthcare and education are expected to gain the most from the metaverse. The metaverse has the potential to radically transform the digital and global economy.

Currently, there is no single metaverse but there are many. All of them are, however, still under development.


Where is world’s Largest Solar Tree?

The largest solar tree in the world has been installed at the CSIR-CMERI Centre of Excellence for Farm Machinery in Ludhiana, Punjab.

A solar power tree is a device that is shaped like a tree with its steel branches holding the solar photovoltaic panels.

Just like a natural tree, the steel branches of the solar tree are arranged in such a fashion that every solar panel is properly exposed to the Sun. Moreover, the panels can be mechanically tilted east or west to derive maximum benefit of the Sun’s position. The height of the tree is about 9-10 metres. One tree can produce about 5kW of power.

One of the main hurdles in installing solar power plants is the lack of large spaces. Often, farmers are reluctant to sacrifice their cultivable land for solar power production. But a solar tree, with its vertically arranged branches, occupies only four sq. ft of area, leaving almost the entire land free for cultivation. The energy generated can be used to run pumps, e-tractors and tillers as a green alternative to diesel.
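
A rough back-of-the-envelope calculation with the article's 5 kW figure, assuming around 5 peak sun-hours a day (an assumed average; the actual number varies with season and location):

```python
# Energy from one solar tree, using the article's 5 kW capacity and an
# assumed 5 peak sun-hours per day (illustrative, not a measured figure).
capacity_kw = 5
peak_sun_hours = 5                         # assumed daily average
daily_energy_kwh = capacity_kw * peak_sun_hours
yearly_energy_kwh = daily_energy_kwh * 365

print(daily_energy_kwh)    # 25 kWh per day
print(yearly_energy_kwh)   # 9125 kWh per year, from just 4 sq. ft of ground
```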

India's first solar power tree was produced by the Central Mechanical Engineering Research Institute (CMERI) at Durgapur. The largest solar tree in the world has been installed at the CSIR-CMERI Centre of Excellence for Farm Machinery in Ludhiana. Its total solar PV panel surface area is 309.83 sq. m. CMERI hopes to install many such solar trees along highways and farmlands.


What is the device of Ulta Chaata used for?

Ulta Chaata, a concave structure, collects rainwater in the monsoon and converts it into potable water. Find out how it’s done.

Ulta Chaata, as the name suggests, resembles an inverted umbrella. It is a large concave structure that collects rainwater in the monsoon and converts it into potable water, while the solar panels fitted alongside the canopy, produce energy in the dry season.

The rainwater collected in the bowl of the Chaata trickles down the stalk to a filtering unit of activated carbon, where it is cleared of impurities. A cluster of ten or more Ulta Chaatas is connected to a common device where the water undergoes further filtration to remove microorganisms, making it fit for drinking. A single unit can harvest as much as 100,000 litres of water every year.

The solar energy harnessed in the dry season is stored in the battery and is used not just to light up the Chaata, but also the premises. Unlike a typical rainwater harvesting unit, Ulta Chaata’s attractive design lends itself well to the aesthetics of the surroundings, especially when lit up.

The device takes up only about one sq. ft of area. Ulta Chaatas can be installed as sustainable workstations in open spaces. They can provide a green roof for reception areas, cafeterias, gazebos, car parks, bus stops and even railway stations.

Besides a number of corporates, Guntakal railway station in Andhra  Pradesh has installed six such structures on its premises.

Ulta Chaata is the brainchild of a Mumbai-based environmentally conscious couple Priya Vakil and Samit Choksy whose start-up ThinkPhi designs sustainable products.

QUICK FACTS

  • A single unit of Ulta Chaata can harvest as much as 100,000 litres of water every year.
  • Ulta Chaata's attractive design lends itself to the aesthetics of the surroundings.
  • The solar panels fitted alongside the canopy produce energy in the dry season.


What’s next in the smart ecosystem?

While many facets of life already have a smart counterpart, colour-changing fibres could be a gamechanger in the wearables market.

We live in a world of smart devices. It wasn’t always the case though. There’s been an eruption of sorts in the last couple of decades as there is an attempt to make every conceivable device now into a smart gadget.

It all started with the proliferation of smartphones. With each of us holding onto one of these almost all the time, it was a matter of time before the manufacturers wanted to put more smart gadgets in our control.

It was in such a climate that household appliances such as televisions, refrigerators, and even washing machines started becoming smarter. With smart bulbs, speakers, and devices to control the entire ecosystem, many facets of life now have a smart counterpart.

Colour-changing fibres

If you had ever wondered what could be next in the smart ecosystem, you might be surprised to know the answer. Researchers from the University of Luxembourg have come up with colour-changing fibres that could well pave the way for… you guessed it, smart clothes! Their results were published in Nature Materials in September 2022.

Up until now, clothing has mainly been about covering our body, protecting it from the environment, and maybe even flaunting our style. The future, however, could see clothing become part of the wearable technology bandwagon.

Remains mechanoresponsive

The researchers used a Cholesteric Liquid Crystal Elastomer (CLCE), a structurally coloured polymer system that changes colour under mechanical deformation. They then developed a simple, scalable method of creating colour-changing CLCE fibres that can easily be sewn into fabric. The colour of the fibres spanned the entire visible spectrum and showed an excellent mechanochromic response, changing colour continuously and reversibly upon stretching or other mechanical movements.

The team was able to demonstrate the robustness of the CLCE fibres in garments by subjecting them to repeated stretching, machine washing, and abrasion. In addition to their ability to survive long-term use, the fact that they can be woven or sewn into elastic garments, and that they might not impair user comfort, implies that they can be used as smart textiles.

Apart from numerous applications in wearable technology, innovative fashion, and art, the researchers believe the fibres might be particularly useful in sports clothing and wearable robotics. They might even come in handy in non-wearable contexts, such as strain sensing (think ropes incorporating these fibres) and deformation detection. Becoming mainstream might be some distance in the future, by which time "dressing smart" might take on a whole new meaning.


How does a driverless car move?

Self-driving cars are loaded with advanced technology that can sense their environment.

The concept of a driverless car has leapt out of the pages of science fiction with major auto-makers working to make them a reality. So far, driverless cars have logged millions of kilometres in test runs and are steadily becoming a reality despite the many hurdles still to be overcome.

Self-driving cars are loaded with advanced technology like radar, lidar, GPS, cameras, and laser scanners that can sense their environment. The control systems in the car evaluate the sensory information about obstacles, road signs, traffic signals and other cars on the road to chart out a navigable path to the destination. The car’s computers accelerate, cruise at 120 kph, slow down, brake and pass without the human driver even touching the steering wheel or gear shift.

In December 2020, Waymo (formerly known as the Google self-driving car project) became the first service provider to offer driver-less taxi rides to the general public, in a part of Phoenix, Arizona, USA. While Honda has launched its self-driving car in Japan, Mercedes-Benz is in the process of doing so.


Why is 3D printing important for the future?

3D printing upends the standard manufacturing process. There is no doubt that 3D printing is the future and that we may be able to ‘create’ everything, including organs.

The wheel is one of the earliest inventions of mankind, and it revolutionised our life. Since then, we have made strides in varied sectors to change life as we know it. Enter 3D printing. Now, we are in the throes of changing the way matter is looked at and processed.

With 3D printing, life has become easy and different. You can now print what you want. From hobbyists to businesses, everyone is using 3D printing. 3D printing upends the standard manufacturing process.

What's 3D printing?

3D printing is three-dimensional printing and manufacturing of products, and it is an additive manufacturing process.

Additive manufacturing is the process of creating an object by building it up in layers. This is in contrast to subtractive manufacturing, where the end product is created by removing or cutting away matter from a solid block of material.

Since 3D printing is done by adding material, layer upon layer is built up until the end product is realised.
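
The layering arithmetic is simple: the number of passes the printer makes is the object's height divided by the thickness of one layer. The numbers below are purely illustrative:

```python
import math

# How many layers an additive process needs for an object of a given height
# at a given layer thickness (illustrative values, not from any real printer).
def layers_needed(object_height_mm, layer_height_mm):
    return math.ceil(object_height_mm / layer_height_mm)

print(layers_needed(50, 0.2))   # 250 passes for a 50 mm tall part
print(layers_needed(50, 0.1))   # halving the layer height doubles the passes
```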

Applications of 3D printing

There is no doubt that 3D printing is the future and that we may be able to ‘create’ everything, including organs. While the manufacturing and construction industry has been seeing a lot of applications of 3D printing, other areas such as the medical industry, food, packaging, and arms industry are also being revolutionised by 3D printing technology.

The cost-effectiveness, ease of manufacturing, ability to make complicated parts, and lower waste generation are just some of the aspects in favour of 3D printing. Everything from plastic models to steel parts and surgical implants is manufactured through 3D printing.

3D bioprinting is the process of applying 3D printing to produce tissues and organs. So imagine this scenario: instead of waiting for a donor, what if we could just print organs using cells? We are still taking our first steps in bioprinting. Recently, a woman had her external ear reconstructed using a 3D-printed living tissue implant. The transplant was carried out in March in the US on a woman from Mexico who was born with a small right ear.

Over the years, the construction industry has made great strides. Commercial buildings and houses have been created using 3D printing. The first 3D-printed bridge came up in Castilla-La Mancha Park in Alcobendas, Madrid.


What is a Companion distraction-free smartphone?

Companion is a distraction-free smartphone that has the basic functions of a smartphone, but is stripped of everything that could distract you. It can be worn as a necklace, clipped to belt loops, or wherever preferred. Its small, minimalist design features an E-ink display that won’t tempt users with notifications throughout the day. It has an earpiece and microphone for taking calls, a wide + ultra-wide camera lens with flash, and an air quality sensor. It has no ports, thus freeing up internal space and making the device more water resistant, and it is charged wirelessly. Companion is constructed from a bioplastic that is easily manufactured and disassembled for repair or dismantled at end-of-life. This helps create a closed loop system where plastic is recycled many times over without ending up in a landfill. The texture of the bioplastic mimics the slightly rough feel of pebbles, while the device curves at all angles mimicking the natural design of pebbles. The device’s colours feature a speckle pattern on top of soft hues that resemble the colours and patterns of naturally-found pebbles.


HOW TO BE SMART WITH INTERNET OF THINGS (IOT)?

The Internet of Things, or IoT as it is popularly known, is becoming a very important part of not only the technology industry, but also our daily lives. And you may be using IoT without even knowing it!

What is IoT?

IoT is nothing but the billions of physical devices that are all connected to the Internet. These devices can then be controlled and can communicate information without any help from humans.

The IoT connects "dumb" devices like refrigerators, washing machines or a kettle to the Internet using software and makes them "smart" IoT devices. These IoT devices can now collect and exchange data around the world and have some digital intelligence!

Chatting with each other

With IoT, devices or machines can talk with each other, or with the people controlling them, by messaging over the Internet. This means that these devices can tell other devices, as well as people, whether something is wrong with them or they are functioning well.

For example, with IoT a car becomes smart and can communicate to tell you that it needs petrol.

This has become possible as Wi-Fi networks are very common and devices can now have software to allow Internet access and make use of the Wi-Fi connection. The IoT requires sensors and software to collect data and communicate.
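
The kind of message such a device might send can be sketched as a small piece of JSON. The field names below are made up for illustration and are not any real product's protocol; a "smart" car like the one in the example above could report its fuel level this way:

```python
import json

# Build a hypothetical IoT status message (field names are invented).
def build_status_message(device_id, sensor, value, unit):
    return json.dumps({
        "device": device_id,
        "sensor": sensor,
        "value": value,
        "unit": unit,
    })

msg = build_status_message("car-42", "fuel_level", 8, "percent")
print(msg)

# The receiving side - an app on your phone, say - simply decodes the JSON.
data = json.loads(msg)
print(data["sensor"], data["value"])  # fuel_level 8
```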

A personal computer or a laptop is not usually considered an IoT device. Neither is a smartphone, despite it having sensors.

Aeroplane engine

Large machines, like an aeroplane engine, may be filled with numerous smaller IoT components and devices, with thousands of them relaying data back and forth and sensors gathering information to make sure the engine is running efficiently.

IoT is here to stay, to make your life even easier!


WHAT IS THE MOST POWERFUL SUPERCOMPUTER IN THE WORLD?

Frontier is the first exascale supercomputer. This means that it is a computing system capable of at least one exaflop, or a billion billion calculations per second (10¹⁸).

We were heralded into a new era of computational capability in May 2022 as the U.S. retook the top spot in the race to build the world's fastest supercomputer. Capable of a billion billion operations per second, Frontier is the first exascale supercomputer. This means it is a computing system capable of at least one exaflop, or 10¹⁸ calculations per second.

Different level

The fastest supercomputers in existence before Frontier are still in the petascale, capable of a quadrillion (10¹⁵) calculations per second. By reaching a quintillion (10¹⁸) operations per second, Frontier has taken computing to a whole new level.

Built at the U.S. Department of Energy's Oak Ridge National Laboratory (ORNL) in Tennessee, Frontier demonstrated a processing speed of 1.102 exaflops in a benchmarking test called the High-Performance Linpack (HPL).

Faster than the fastest

Frontier took the title of the world's fastest supercomputer from the Japanese supercomputer Fugaku, which had held the position for two years after scoring 415.5 petaflops on the HPL benchmark. At that time, Fugaku was thrice as fast as the machine it had ousted – the Summit supercomputer built by IBM, also housed at ORNL.
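
A quick check with the figures above shows how big the jump is:

```python
# Arithmetic with the benchmark figures quoted in the article.
frontier_flops = 1.102e18     # 1.102 exaflops
fugaku_flops   = 415.5e15     # 415.5 petaflops

print(frontier_flops / fugaku_flops)   # ~2.65: Frontier is over 2.5x faster

# At one exaflop, a workload Frontier finishes in one second would keep a
# machine doing a billion (1e9) operations per second busy for over 31 years.
print(1e18 / 1e9 / (3600 * 24 * 365))  # ~31.7 years
```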

The progress in this field has been rapid in the last few years as computer scientists worldwide had been working towards surmounting the exascale barrier. The exascale milestone represents a new level of computational power capable of calculating solutions to highly complex problems. Be it climate systems, new kinds of materials and medicines, or even some of the deepest questions of humankind, exascale devices such as the Frontier can efficiently process vast amounts of data.

This incredible machine, which was built at a cost of $600 million, is undoubtedly the most advanced computer currently on Earth. The unmatched capabilities of Frontier as a tool for scientific discovery will surely unlock new frontiers of knowledge.


WHAT IS MORAVEC’S PARADOX?

Artificial intelligence can simplify complicated tasks but it may still be unable to do what humans do instinctively.

It is a concept in computing put forward by Austrian artificial intelligence (AI) researcher Hans Moravec in the 1980s. He theorised that while it is easy to make computers do highly intelligent tasks such as solving complicated mathematical equations, it is very difficult to make them do simple tasks such as walking. According to Moravec, humans have evolved over millions of years to perfect simple physical tasks such as walking and running. Such tasks, which we take for granted, are a result of the process of natural selection.

Moravec's paradox states that it is difficult to build a machine with the skills of a one-year-old child: the instinctive ability to move around, recognise faces, and avoid danger. It takes a lot of difficult computation to instruct a computer to do what a human being can do without thinking twice. On the other hand, humans acquired sophisticated skills such as abstract reasoning and logical thinking, which result in excellence in engineering, mathematics and art, only about a hundred thousand years ago. It is comparatively easy to devise algorithms for these skills. That is why it is easy to build a computer that can defeat a professional chess player or play music.

Moravec's paradox can be interpreted in different ways. Some scholars believe it means that AI can render people with high-level jobs, such as stock analysts or engineers, unemployed, while the jobs of cooks and gardeners are safe. Others take it to mean that AI will always need human supervision.


HOW DOES AN LED WORK?

LED stands for light-emitting diode. It is a semiconductor device that emits light when an electric current flows through it. Unlike other lights, LEDs barely dim with time and have an extended lifespan that can last for years. They also do not contain poisonous substances like the mercury commonly used to make traditional lights. These energy-efficient bulbs are made of glass and aluminium, which can be recovered by recycling and used to create other products.

The LED is a specialised form of PN junction that uses a compound semiconductor. The commonly used semiconductor materials, silicon and germanium, are simple elements, and junctions made from them do not emit light. Compound semiconductors such as gallium arsenide, gallium phosphide and indium phosphide, however, form junctions that do emit light.

These compound semiconductors are classified by the valence bands their constituents occupy. In gallium arsenide, gallium has a valency of three and arsenic a valency of five, so it is termed a group III-V semiconductor, and a number of other semiconductors fit this category. It is also possible to have semiconductors formed from group II-VI materials.

The light emitting diode emits light when it is forward biased. When a voltage is applied across the junction to make it forward biased, current flows as in the case of any PN junction. Holes from the p-type region and electrons from the n-type region enter the junction and recombine like a normal diode to enable the current to flow. When this occurs energy is released, some of which is in the form of light photons.

It is found that the majority of the light is produced from the area of the junction nearer to the P-type region. As a result the design of the diodes is made such that this area is kept as close to the surface of the device as possible to ensure that the minimum amount of light is absorbed in the structure.

To produce light that can be seen, the junction must be optimised and the correct materials chosen. Pure gallium arsenide releases energy in the infra-red portion of the spectrum. To bring the emission into the visible red end of the spectrum, aluminium is added to the semiconductor to give aluminium gallium arsenide (AlGaAs). Phosphorus can also be added to give red light. For other colours, other materials are used: for example, gallium phosphide gives green light, and aluminium indium gallium phosphide is used for yellow and orange light. Most LEDs are based on gallium semiconductors.
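
The link between material and colour follows from the semiconductor's band gap: the emitted photon's wavelength in nanometres is roughly 1240 divided by the band gap in electron-volts. The band-gap values below are approximate textbook figures, not taken from the article:

```python
# Wavelength (nm) ~ 1240 / band gap (eV) - the standard photon-energy relation.
def emission_wavelength_nm(band_gap_ev):
    return 1240 / band_gap_ev

print(round(emission_wavelength_nm(1.42)))  # ~873 nm: GaAs, infra-red
print(round(emission_wavelength_nm(1.9)))   # ~653 nm: AlGaAs, visible red
print(round(emission_wavelength_nm(2.26)))  # ~549 nm: GaP, green
```

This is why adding aluminium to gallium arsenide moves the emission into the visible range: it widens the band gap, shortening the emitted wavelength.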

Credit : Electronics notes 


WHAT IS 3D PRINTING? HOW DOES THE TECHNOLOGY WORK?

Your mother wants to make a duplicate set of the house keys. That would involve a visit to the local key-maker. Wouldn’t she be relieved if you could make her a set sitting right at home? Well, that day is not too far into the future and best of all, it’s not science fiction. 3D printing is here!

Additive manufacturing

3D printing is not really new. It is a type of additive manufacturing, or AM, which means creating an object by adding material to it layer by layer. AM is also known as stereolithography, 3D layering and 3D printing. 3D printing can be compared to stalactites and stalagmites in limestone caves or to coral reefs. Both are built by adding material layer by layer, bit by bit, until they form a solid structure. This natural process is very slow. In 3D printing, the design is precisely engineered with computer software, and the computer directs the printer on how to add the layers.

Used in diverse fields

3D printing was earlier used to build prototypes or models of objects. Now there are a variety of printers that can create products in a vast number of fields. Already available in the market are 3D printers that can roll out anything from a precision-moulded car part, a designer chocolate and a customised toy to artificial limbs, dentures and even living human tissue for building organs!

First, a 3D model is produced on a computer using CAD, or computer-aided design, software. CAD can also tell you how the model will work when made with the kind of material you are using; in fact, the working can be seen using virtual simulation. The second step is converting the CAD model to a format that will work with the designated printer and then transferring it to the computer that controls the printer. Just as with a normal laser or inkjet printer, you can feed in the size and orientation (landscape, portrait, etc.). Each kind of printer uses different materials (its printing "inks" or "toners") to build the object: cheese or chocolate for food items, liquid polymers or other chemical binders for inedible objects like car or aeroplane parts and dental fixtures, or even live cells to produce human tissue (bioprinting). The object is layered on a tray made of water-soluble material, so once the object is created, this support can be easily removed!
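The key step between the CAD model and the printer is "slicing": cutting the 3D shape into thin horizontal layers and turning each one into a path for the print head to trace. This toy sketch (the function name and output format are illustrative, not any real slicer's) approximates a cylinder as a stack of circular layers, each a polygon of points at a fixed height.

```python
import math

# Toy "slicer" sketch: approximate a cylinder as a stack of circular
# layers, one polygonal path per layer, the way a slicer turns a 3D
# model into layer-by-layer instructions for a print head.

def slice_cylinder(radius_mm, height_mm, layer_height_mm=0.2, segments=12):
    """Return a list of layers; each layer is a list of (x, y, z) points."""
    layers = []
    n_layers = round(height_mm / layer_height_mm)
    for i in range(n_layers):
        z = round((i + 1) * layer_height_mm, 3)
        # One ring of points approximating the circular cross-section.
        path = [
            (round(radius_mm * math.cos(2 * math.pi * k / segments), 3),
             round(radius_mm * math.sin(2 * math.pi * k / segments), 3),
             z)
            for k in range(segments)
        ]
        layers.append(path)
    return layers

layers = slice_cylinder(radius_mm=10, height_mm=1.0)
print(f"{len(layers)} layers, {len(layers[0])} points per layer")
```

A real slicer does far more (infill, supports, extrusion rates), but the principle is the same: the printer never sees the solid object, only a long sequence of flat layers to deposit one on top of the other.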

The machine may take hours or days to complete the object and it can take more time to cool, set or cure till it is fit to handle and be used.

It’s expensive, right now

3D printers are expensive right now, ranging from $30,000 to $80,000. However, as the technology evolves, the cost is expected to come down and you may eventually be able to print a set of keys at home!

Critics of 3D printing feel that the technology may be misused to print weapons. One nervous state in the U.S. has already passed a law banning 3D plastic and metal guns, and taken down a website that showed people how to make them!

Did you know?

A family in France became the first in the world to move into a 3D printed house in July 2018. The four-bedroom house took 54 hours to print, with an additional four months for contractors to add doors, windows and the roof! The design of the house was programmed into a 3D printer which worked by printing the walls in layers from the floor upwards. The cost of construction was 20 per cent less than that of a traditionally built house.

Picture Credit : Google 

HOW MUCH WASTE IS PRODUCED BY PHONES?

According to a report by Counterpoint, smartphones account for 12% of global e-waste. Smartphone production alone contributes 80-90% of a device's carbon emissions. Devices containing lithium-ion batteries (mostly smartphones) pose a significant risk to the environment.

A mobile phone contains over 60 different metals, including rare earth metals that can contaminate soil and water if not disposed of properly. Demand for mobile phones has increased mining activities for these metals, which adversely impact the environment at the extraction stage itself. They are listed as ‘endangered metals’, as they are available only in limited quantities. Counterpoint estimated that about 6-7 kg of high-grade gold ore is mined to make a single mobile phone.

India, the U.S., the U.K., China and Japan are the highest e-waste generating nations. During the 2020-2030 decade, 40% more e-waste will be generated, but the corresponding global e-waste recycling rate will be only 20%.

Some countries have been working towards reducing e-waste. Japan’s first-of-its-kind initiative recycled e-waste to produce medals for the 2020 Tokyo Olympics. The U.K.’s Right to Repair legislation allows consumers to repair their electronic devices and requires manufacturers to supply the necessary parts. France’s Repairability Index mandates a clear display of information on the repairability of electronics to encourage consumers to choose repairable products.

Picture Credit : Google 

What is X-ray Day?

November 8 is X-ray Day. X-rays were discovered in 1895 by German physicist Wilhelm Conrad Röntgen, who received the first Nobel Prize in Physics in 1901, yet never tried to patent his discovery.

The X-ray was discovered by accident, during an experiment in which Röntgen was attempting to ascertain whether or not cathode rays could pass through glass. He noticed an odd glow emanating from a chemically coated screen nearby, and dubbed the rays causing that glow X-rays. Why, you ask? Because he didn't know what they were, so the ubiquitous ‘X for unknown’ was utilised. They've been called X-rays ever since.

So what are X-rays, really? They're electromagnetic waves that act in much the same way light rays do, but with an incredibly short wavelength: 1,000 times shorter than that of light, to be precise. Once he discovered them, Röntgen began experimenting with them extensively, determining what they could and couldn't pass through, and how they could be photographed. It was through this that he discovered that lead absorbs them almost completely, and that human bone stops them, creating a new and innovative way to see what was going on inside the human body.
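That "1,000 times shorter" wavelength matters because a photon's energy is inversely proportional to its wavelength (E = hc / λ), so an X-ray photon carries roughly 1,000 times more energy than a visible-light photon. A quick back-of-the-envelope sketch, using standard physical constants and two illustrative wavelengths (the specific values chosen here are assumptions for the comparison):

```python
# Sketch: comparing photon energies of visible light and X-rays
# via E = h*c / wavelength, converted to electron-volts.

H = 6.626e-34    # Planck's constant, J*s
C = 3.0e8        # speed of light, m/s
EV = 1.602e-19   # joules per electron-volt

def photon_energy_ev(wavelength_m: float) -> float:
    """Energy of a single photon of the given wavelength, in eV."""
    return H * C / wavelength_m / EV

visible = photon_energy_ev(500e-9)  # green light, ~500 nm
xray = photon_energy_ev(0.5e-9)     # an X-ray, 1,000 times shorter

print(f"visible: ~{visible:.1f} eV, X-ray: ~{xray:.0f} eV")
```

The factor-of-1,000 energy difference is exactly why X-rays punch through soft tissue that stops ordinary light, while denser materials like bone and lead absorb them.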

X-rays were used extensively during the Balkan War to locate shrapnel, bullets and broken bones in soldiers in the field. They were also used in things like shoe-shop fittings, until it became apparent that it wasn't all fun and games. Now they're used for things like security at airports, material analysis and more, but with much more attention to safety.

The best way to celebrate World Radiography Day is to research X-rays and what they've done for us. Then you can sit down and try to think of all the different ways that X-rays are used in modern living.

Credit :  Days of the year

Picture Credit : Google