General issues in environmental ecology

The environment plays a significant role in supporting life on earth, but several issues are damaging both life and the earth's ecosystem. These issues concern not only the environment itself but everyone who lives on the planet. Their main sources are pollution, global warming, greenhouse gases, and many others. The everyday activities of humans are constantly degrading the quality of the environment, which will ultimately result in the loss of survivable conditions on earth. There are hundreds of issues causing damage to the environment, but here we discuss the main causes, because they are the most dangerous to life and the ecosystem.

Pollution – It is one of the main causes of environmental issues because it poisons the air, water, and soil. In the past few decades the number of industries has increased rapidly, and many of them discharge their untreated waste into water bodies, onto soil, and into the air. Much of this waste contains harmful and poisonous materials that spread easily through the movement of water and wind.

Greenhouse Gases – These are the gases responsible for the increase in the temperature of the earth's surface. They relate directly to air pollution, since the exhaust of vehicles and factories contains toxic chemicals that harm life and the environment of earth.

Climate Change – Due to environmental issues the climate is changing rapidly, and things like smog and acid rain are becoming common. The number of natural calamities is also increasing: almost every year there are floods, famines, droughts, landslides, earthquakes, and other calamities.

Sustainable development recognises that social, economic, and environmental issues are interconnected, and that decisions must take each of these aspects into account if they are to be good decisions in the longer term. For sustainable development, accurate environmental forecasts and warnings, together with reliable information on pollution, are essential for planning and for ensuring safe and environmentally sound socio-economic activities.


THE EARTH IS WHAT WE ALL HAVE IN COMMON.

History of India & Indian National Movement.

From early times the Indian subcontinent appears to have provided an attractive habitat for human occupation. Toward the south it is effectively sheltered by wide expanses of ocean, which tended to isolate it culturally in ancient times, while to the north it is protected by the massive ranges of the Himalayas, which also sheltered it from the Arctic winds and the air currents of Central Asia. Only in the northwest and northeast is there easier access by land, and it was through those two sectors that most of the early contacts with the outside world took place.

Within the framework of hills and mountains represented by the Indo-Iranian borderlands on the west, the Indo-Myanmar borderlands in the east, and the Himalayas to the north, the subcontinent may in broadest terms be divided into two major divisions: in the north, the basins of the Indus and Ganges (Ganga) rivers (the Indo-Gangetic Plain) and, to the south, the block of Archean rocks that forms the Deccan plateau region. The expansive alluvial plain of the river basins provided the environment and focus for the rise of two great phases of city life: the civilization of the Indus valley, known as the Indus civilization, during the 3rd millennium BCE; and, during the 1st millennium BCE, that of the Ganges. To the south of this zone, and separating it from the peninsula proper, is a belt of hills and forests, running generally from west to east and to this day largely inhabited by tribal people. This belt has played mainly a negative role throughout Indian history in that it remained relatively thinly populated and did not form the focal point of any of the principal regional cultural developments of South Asia. However, it is traversed by various routes linking the more-attractive areas north and south of it. The Narmada (Narbada) River flows through this belt toward the west, mostly along the Vindhya Range, which has long been regarded as the symbolic boundary between northern and southern India.

India's movement for independence occurred in stages, elicited by the inflexibility of the British and, in various instances, by their violent responses to non-violent protests. It was understood that the British were controlling the resources of India and the lives of its people, and that until this control ended, India could not belong to Indians.

On 28 December 1885 the Indian National Congress (INC) was founded on the premises of the Gokuldas Tejpal Sanskrit School at Bombay. The session was presided over by W.C. Banerjee and attended by 72 delegates. A.O. Hume played an instrumental role in the foundation of the INC, with the aim of providing a 'safety valve' for the British Government.
A.O. Hume served as the first General Secretary of the INC.
The real aim of the Congress was to train Indian youth in political agitation and to organise and create public opinion in the country. For this, it used the method of annual sessions, where problems were discussed and resolutions passed.
The first or early phase of Indian nationalism is also termed the Moderate Phase (1885-1905). Moderate leaders included W.C. Banerjee, Gopal Krishna Gokhale, R.C. Dutt, Ferozeshah Mehta, and George Yule.
The Moderates had full faith in the British Government and adopted the PPP path, i.e. Protest, Prayer, and Petition.
Due to disillusionment with the Moderates' methods of work, extremism began to develop within the Congress after 1892. The Extremist leaders were Lala Lajpat Rai, Bal Gangadhar Tilak, Bipin Chandra Pal, and Aurobindo Ghosh. Instead of the PPP path, they emphasised self-reliance, constructive work, and swadeshi.
With the announcement of the Partition of Bengal (1905) by Lord Curzon, ostensibly for administrative convenience, the Swadeshi and Boycott resolution was passed in 1905.


ONE INDIVIDUAL MAY DIE; BUT THAT IDEA WILL, AFTER HIS DEATH, INCARNATE ITSELF IN A THOUSAND LIVES.

-Netaji Subhash Chandra Bose

Internet Protocol

What is an IP address?

An IP address, an abbreviation of Internet Protocol address, is an address provided by the Internet Service Provider to the user. It works much like a postal PIN code, identifying the place to which a message should be sent. An IP address is a unique group of numbers separated by periods (.), each number ranging from 0 to 255. Every device has a separate, unique IP address assigned by its Internet Service Provider (ISP), which identifies the particular device that is communicating and accessing the internet.
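To make the dotted-quad idea concrete, here is a minimal Python sketch using only the standard-library ipaddress module; the sample addresses are arbitrary illustrations:

```python
# Minimal sketch: checking the dotted-quad structure described above,
# i.e. four numbers from 0 to 255 separated by periods.
import ipaddress

def describe(addr: str) -> None:
    try:
        ip = ipaddress.ip_address(addr)
        print(f"{addr}: valid IPv{ip.version} address (private: {ip.is_private})")
    except ValueError:
        print(f"{addr}: not a valid IP address")

describe("192.168.1.10")  # valid, falls in a private range
describe("8.8.8.8")       # valid public address
describe("256.10.10.1")   # invalid: 256 is outside the 0-255 range
```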

If you want to access the internet from a device, whether an Android phone, an iPhone, or a computer, the service provider assigns it a particular, unique address that helps information be sent to and received from the right device, so that a message reaches the person it is meant for without mistakes. The idea is much like the postal system of earlier days: to send a letter, the sender wrote the recipient's address, house number, city, town, and postal code, on the envelope so it would be delivered to the right person. The IP address solves the same problem on the internet. If a person connects a device to the internet provided by a hotel, the hotel's Internet Service Provider will assign an IP address to that device.

Types of IP addresses

There are different types of IP addresses, based on different categories of use.

Consumer IP addresses

A consumer IP address is the individual IP address of a customer who connects a device to a public or private network. A consumer connects to the internet through an Internet Service Provider, or over Wi-Fi. These days a consumer may own many electronic gadgets, all connected to a router that carries data to and from the Internet Service Provider.

Private IP addresses

Private IP addresses are used within a private network. Every device connected to that network, mobile phones, computers, and Internet of Things devices alike, is assigned its own unique address, typically handed out by the network's router, so that devices on the same network can be distinguished from one another.

Public IP addresses

A public IP address is the main address associated with your network as a whole. As stated above, IP addresses are assigned by the Internet Service Provider; the public IP address is no exception, and ISPs hold large pools of addresses from which they allocate one to each customer. The public IP address is the address that devices outside the network use to identify it.

Public IP addresses are further classified into two types:

  1. Dynamic
  2. Static

Dynamic IP addresses

A dynamic IP address is one that changes frequently. Internet Service Providers purchase very large pools of IP addresses and assign them to customers automatically. Because the address keeps changing, the customer needs to take fewer security measures, and a frequently changing IP address makes it harder for hackers to track a device or harvest its data.

Static IP addresses

A static IP address is the opposite of a dynamic one: it remains fixed once the Internet Service Provider has assigned it. Most individuals and businesses do not choose a static address, because a fixed address carries a greater risk of being tracked. But businesses that host their own website servers usually choose a static IP address, since it makes them easier for customers to find.

An IP address can be protected in two ways: by using a proxy, or by using a Virtual Private Network (VPN). A proxy server acts as an intermediary between your device and the wider internet; when you visit a website, it sees the proxy's IP address instead of yours.
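As a rough illustration, the sketch below shows how a proxy might be used from Python with the third-party requests library. The proxy address is a placeholder (203.0.113.0/24 is reserved for documentation), and httpbin.org is just a convenient service that echoes back the IP address it sees; both are assumptions for the example, not part of the original text:

```python
# Sketch: sending a request through a proxy so the destination server
# sees the proxy's IP address instead of your own.
import requests

# Placeholder proxy address; substitute a real proxy you have access to.
proxies = {
    "http": "http://203.0.113.5:8080",
    "https": "http://203.0.113.5:8080",
}

# https://httpbin.org/ip simply reports the IP address that reached it.
response = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(response.json())  # with a working proxy, this shows the proxy's address
```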

Where to find the IP address in a device

An IP address is set up in every device that connects to the internet, but the steps to find it differ from device to device. Directions for some common devices are given below, with a small programmatic sketch after them:

On Windows or any other personal computer

  1. Go to the Start Menu
  2. Type 'Run' in the search bar
  3. A Run dialog pops up
  4. Type 'cmd'
  5. A command prompt (black screen) opens
  6. Type 'ipconfig'
  7. Your IP address is displayed.

On an Android mobile

  1. Go to the Settings
  2. Tap on Network and Internet
  3. Tap on Wi-Fi, it will show the IP address
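You can also find the address programmatically. Below is a small Python sketch using the standard-library socket module; it asks the operating system which local address it would use to reach an outside host (8.8.8.8 is only a well-known destination here, and "connecting" a UDP socket sends no actual packets):

```python
# Sketch: discovering the local IP address of the current device.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    # Connecting a UDP socket transmits nothing; it merely makes the OS
    # choose the local interface (and address) it would use for this route.
    s.connect(("8.8.8.8", 80))
    print("Local IP address:", s.getsockname()[0])
finally:
    s.close()
```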

Web 3.0

Previous versions of Internet Era

Web 1.0: The first version of the web began with the development of the web browser in 1991. It consisted of static websites with content written by a few people and organizations. Everyone else could only read the content; they could not comment or provide new information, so it was just one-way communication. It worked very well but had one big problem: there was no way to make money off it. For instance, a Web 1.0 startup called Google had heavy traffic but couldn't monetize it.

Web 2.0: The next version of the web, Web 2.0, began around 2004. It allowed consumers to add content through comments, blogs, etc., and people began creating a great deal of content on social media websites as well. So people could both read and write on this version of the web, which allowed two-way communication.

What is Web 3.0?

Any innovation starts with a vision, and many people had different visions of how the next version of the web should look. The majority of them wanted a web that ensured data privacy and free speech. The invention of blockchain technology, which enables peer-to-peer online payment transfers without the interference of banks, gave hope of creating a decentralized web where user privacy and free speech are guaranteed. The latest technologies, such as blockchain, artificial intelligence, and the Internet of Things, are being used to create Web 3.0.

Web 3.0 is defined as a decentralized web, where content does not lie in the hands of big corporations. Instead, it uses peer-to-peer infrastructure, so the information cannot be censored by corporations or the government. So, it can ensure free speech.

However, the reality may or may not match the vision. It may change somewhat from the vision or take a whole different direction.

The vision of web 3.0

  • Web 3.0 will most likely be a decentralized internet. There are already many decentralized applications (dApps) based on blockchain technology, which give users more control over their data and finances.
  • As the data is not controlled by big companies, user privacy will be guaranteed.
  • The accuracy of the information may also be improved by making Artificial intelligence learn to distinguish between good and bad data. AI is already being used to accomplish this goal. Google, for example, uses Artificial Intelligence to delete millions of fake reviews.

  • Web 3.0 allows 3D graphics in apps. Big tech companies have already begun to invest in metaverses, virtual environments. Some of the most popular metaverses include Decentraland, Sandbox, and CryptoVoxels. Metaverses are made possible with the help of Virtual Reality (VR) and Augmented Reality (AR) technologies. We may use our digital avatars to interact, shop, and play games in the virtual world, and we can use cryptocurrencies there for financial transactions.
  • Several websites and apps are already incorporating Web 3.0. According to some experts, Web 3.0 will not be able to totally replace Web 2.0 in the near future; instead, both will run simultaneously.

Challenges with Web 3.0

  • Vastness: The internet is huge, containing billions of pages; the SNOMED CT medical terminology ontology alone includes 370,000 class names, and existing technology has not yet been able to eliminate all semantically duplicated terms.
  • Vagueness: User queries are not always specific and can be extremely vague at the best of times. Fuzzy logic is used to deal with vagueness (see the toy sketch after this list).
  • Uncertainty: The internet deals with scores of uncertain values. For example, a patient might present a set of symptoms that correspond to many different distinct diagnoses each with a different probability. Probabilistic reasoning techniques are generally employed to address uncertainty.
  • Inconsistency: Inconsistent data can lead to logical contradictions and unreliable analysis.
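
As a toy illustration of the fuzzy-logic idea mentioned under Vagueness above, here is a hypothetical Python membership function; real fuzzy systems are far richer, and the thresholds below are invented purely for the example:

```python
# Toy fuzzy-logic sketch: instead of a hard yes/no, a vague query term
# like "cheap" gets a degree of membership between 0.0 and 1.0.
def cheap_membership(price: float) -> float:
    """Degree to which a price counts as 'cheap' (thresholds are invented)."""
    if price <= 10:
        return 1.0              # definitely cheap
    if price >= 50:
        return 0.0              # definitely not cheap
    return (50 - price) / 40    # linear ramp in between

for p in (5, 25, 45):
    print(f"price {p}: cheap to degree {cheap_membership(p):.2f}")
```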

Conclusion

Web 3.0 is the next step in the internet's evolution, and its foundations have already been set. According to current expectations, Web 3.0 will be a huge advance in network technology, a hyper-intelligent network capable of understanding information much as a human does. Aside from its technological marvels, Web 3.0 also proposes ideas that will drastically alter the existing mode of operation of today's networks. And we, the end users, will be ushered into a new era of networking, one that will further blur the line between physical and digital space.

Metaverse

What is Metaverse?

In the metaverse, people can interact with each other using virtual and augmented reality technologies, resulting in a shared virtual world.

The metaverse is considered part of Web 3.0. The earliest version of the internet, which consisted of web pages that simply provided information, is termed Web 1.0. The next version consisted of interactive web pages. Now, Web 3.0 will result from assimilating virtual reality and augmented reality into Web 2.0.

We can shop, play games, buy things and own places in the metaverse. Several companies are creating gaming metaverses. The game ‘Second Life,’ which was released in 2003, can be considered an early version of the metaverse.

Benefits of Metaverse

  • It will be quite beneficial for hosting meetings. Video conferencing has some drawbacks, such as the lack of a personal connection; by interacting through digital avatars, the metaverse will make us feel as if we are in the same place.
  • It will also help people with special needs.
  • It can also be used to help people overcome phobias.
  • It is expected that virtual currencies in the metaverse will significantly influence the world economy. Decentralization will reduce the dependence on governments.

Challenges with Metaverse

  • A few companies may control the metaverse, and hence power and influence may stay in the hands of a few people.
  • Government surveillance and control could increase through collaboration with businesses.
  • Addictions to the internet and smartphones are already common, so virtual-world addiction may become the next huge concern; the metaverse bundles entertainment, shopping, games, and many other things that are addictive in nature.
  • Even in this modern era, not everyone has access to the internet, and many people are digitally illiterate. Due to the digital divide, the benefits of the metaverse will not be accessible to many.

Conclusion

Some people believe that the metaverse will be the internet’s future. Many businesses are investing in the development of the metaverse. It is important to ensure that no monopoly exists in the shared virtual environment.

New NASA Earth System Observatory to Help Address, Mitigate Climate Change

May 24, 2021

NASA will design a new set of Earth-focused missions to provide key information to guide efforts related to climate change, disaster mitigation, fighting forest fires, and improving real-time agricultural processes. With the Earth System Observatory, each satellite will be uniquely designed to complement the others, working in tandem to create a 3D, holistic view of Earth, from bedrock to atmosphere.



“I’ve seen firsthand the impact of hurricanes made more intense and destructive by climate change, like Maria and Irma. The Biden-Harris Administration’s response to climate change matches the magnitude of the threat: a whole of government, all hands-on-deck approach to meet this moment,” said NASA Administrator Sen. Bill Nelson. “Over the past three decades, much of what we’ve learned about the Earth’s changing climate is built on NASA satellite observations and research. NASA’s new Earth System Observatory will expand that work, providing the world with an unprecedented understanding of our Earth’s climate system, arming us with next-generation data critical to mitigating climate change, and protecting our communities in the face of natural disasters.”

Technological Determinism

Technological determinism is a reductionist theory that aims to provide a causative link between technology and a society's nature. It tries to explain who or what could hold controlling power in human affairs, and questions the degree to which human thought or action is influenced by technological factors.




The term 'technological determinism' was coined by Thorstein Veblen, and the theory revolves around the proposition that the technology in a given society defines its nature. Technology is viewed as the driving force of a society's culture and as determining its course of history.

Karl Marx believed that technological progress led to newer ways of production in a society, and that this ultimately influenced the cultural, political, and economic aspects of the society, thereby inevitably changing society itself. He illustrated this with the example of how a feudal society that used the hand mill slowly changed into an industrial capitalist society with the introduction of the steam mill.

WINNER'S HYPOTHESES

Langdon Winner provided two hypotheses for this theory:

  1. The technology of a given society is a fundamental influence on the various ways in which that society exists.
  2. Changes in technology are the primary and most important source of change in society.

An offshoot of these hypotheses, which is not as extreme, is the belief that technology influences the choices we make, and that a changed society can therefore be traced back to changed technologies.

Technological determinism manifests itself at various levels. It begins with the introduction of newer technologies, which brings various changes, and at times these changes can also lead to a loss of existing knowledge. For example, the introduction of newer agricultural tools and methods has been accompanied by the gradual loss of knowledge of traditional means of farming. Technology is therefore also influencing the level of knowledge in a society.

Examples of Technological determinism

History shows us numerous examples of why technology is considered to determine the society we live in. The invention of the gun changed how disputes were settled and changed the face of combat. A gun required minimal effort and skill to use successfully and could be fired from a safe distance. Compared with earlier wars fought with swords and archery, this led to a radical change in the weapons of war.

Today, with the discovery of nuclear energy, future wars may be fought with nuclear arsenals. Each new discovery causes a transition to a different society: the discovery of steam power led to the development of the industrial society, and the introduction of computers has led to the dawn of the information age.

Technological Drift

Winner believed that changes in technology sometimes had unintended or unexpected results and effects as well. He called this phenomenon 'technological drift': people drift more and more among a sea of unpredictable and uncertain consequences. According to Winner, technology is not the slave of the human being; rather, humans are slaves to technology, forced to adapt to the technological environment that surrounds them.

Forms of Technological Determinism

An alternative, weaker view of technological determinism says that technology serves a mediating function: despite leading to changes in culture, it is actually controlled by human beings. When control of technology slowly slips from the hands of human beings, it passes completely into the control of technology itself. This view of humans having no control is referred to as 'autonomous technological determinism.'

Technological Determinism and Media

New media are not only an addition to existing media; they are also new technologies and therefore have a deterministic aspect as well. Marshall McLuhan famously stated that "the medium is the message," meaning that the medium used to communicate influences the mind of the receiver. The introduction of newsprint, television, and the internet has shown how technological advances have an impact on the society in which we live.

Criticism of Technological Determinism

A critique of technological determinism is that technology never forces itself on members of society. Man creates technology and chooses to use it: he invents television and chooses to view it. Technology does not impose itself; rather, it requires people to participate or involve themselves at some point, to drive a car or use a microwave. The choice of using technology, and of experiencing its effects, therefore lies in human hands.

Written by: Ananya Kaushal

The incredible journey of Elon Musk’s SpaceX – The engineering masterpiece

The Falcon super heavy launch vehicle was designed to transport people, spacecraft, and various cargoes into space. Such a powerful machine wasn't created instantly; it had its predecessors. The history of the Falcon family of vehicles began with the Falcon 1, a lightweight launch vehicle with a length of 21.3 meters, a diameter of 1.7 meters, and a launch mass of 27.6 tonnes; the rocket could carry 420 kilograms (926 pounds) of payload on board. It became the first privately developed vehicle to bring cargo into low earth orbit. The Falcon 1 consisted of only two stages; the first comprised a supporting structure with fuel tanks, an engine, and a parachute system. Kerosene was chosen as the fuel, with liquid oxygen as the oxidizer.

The Falcon Heavy side boosters landing – SpaceX

The second stage also contained fuel tanks and an engine, though the latter had less thrust than the first-stage engine, despite the considerable launch cost of $7.9 million. In total, five attempts were made to send the Falcon 1 beyond the atmosphere of our planet, but not all of them were successful. During the rocket's debut launch, a fire started in the first-stage engine; this led to a loss of pressure, which caused the engine to shut down in the 34th second of flight. The second attempt ran into a problem with the second stage's fuel system: fuel stopped flowing into its engine at the 474th second of flight, and it shut down as well. The third time the Falcon 1 went on a flight, it wasn't alone; as serious cargo the rocket carried the Trailblazer satellite and two NASA micro-satellites on board. The first-stage phase of the flight went normally, but when the time came to separate the stages, the first stage struck the second as its engine started, so the second stage couldn't continue its flight.

The fourth and fifth launches showed good results, but that wasn't enough. The main problem with the Falcon 1 was low demand due to its limited payload capability. For this reason, SpaceX designed the Falcon 9, a vehicle that can carry 23 tons of cargo on board. It is also a two-stage launch vehicle and uses kerosene and liquid oxygen as propellants. The vehicle is currently in operation, and the cost of a launch is about $62 million. The first stage of the rocket is reusable; it can return to earth and be flown again. The Falcon 9 is designed not only to launch commercial communication satellites but also to deliver Dragon 1 to the ISS. Dragon 1 can carry a six-ton payload from earth; this spacecraft supplies the ISS with everything the crew needs and also takes goods back.

The prototype of SpaceX's Starship had its first free flight on July 25, 2019

Dragon 2 is designed to deliver a crew of four people to the ISS and back to earth. There is now also an ultra-heavy launch vehicle with a payload capacity of almost 64 tonnes: the most powerful and heaviest of the family, called the Falcon Heavy. This rocket was first launched on February 6, 2018, and the test was successful; the rocket sent Elon Musk's car, a red Tesla Roadster, into space. After this debut, subsequent launches were also conducted without problems. The launch cost is estimated at $150 million.

The first stage of the Falcon Heavy consists of three parts: three blocks containing 27 incredibly powerful engines, nine in each one. The thrust created at takeoff is comparable to eighteen Boeing 747s at full power. The second stage is equipped with a single engine. It is planned that the vehicle will be used for missions to the moon and Mars. Currently, SpaceX is working on the Starship crewed spacecraft. According to its creators, this vehicle will be much larger and heavier than all of the company's existing rockets and will be able to deliver cargo weighing more than a hundred tons into space. The launch of Starship into space, to Mars with a payload, is planned for 2022. Who knows, one of mankind's greatest dreams may come true within the next year.

“When something is important enough, you do it even if the odds are not in your favor.” – Elon Musk

Stephen Hawking's final theory of black holes – Hawking radiation

When a massive star dies, it leaves a small but dense remnant core in its wake. If the mass of the core is more than three times the mass of the sun, the force of gravity overwhelms all other forces and a black hole is formed. Imagine a star ten times more massive than our sun being squeezed into a sphere with a diameter roughly the size of New York City. The result is a celestial object whose gravitational field is so strong that nothing, not even light, can escape it. The history of black holes begins with the father of all physics, Isaac Newton. In 1687, Newton gave the first description of gravity in his publication Principia Mathematica, a work that would change the world.

Then, a hundred years later, John Michell proposed the idea that there could exist a structure so massive that not even light would be able to escape its gravitational pull. In 1796, the famous French scientist Pierre-Simon Laplace made an important prediction about the nature of black holes: he suggested that because even the speed of light is slower than the escape velocity of a black hole, such massive objects would be invisible. In 1915, Albert Einstein changed physics forever by publishing his theory of general relativity, in which he explained spacetime curvature and gave a mathematical description of a black hole. And in 1964, John Wheeler gave these objects their name: black holes.

The Gargantua in Interstellar is an incredibly close representation of an actual black hole

In classical physics, the mass of a black hole cannot decrease; it can either stay the same or grow larger, because nothing can escape a black hole. If mass and energy are added to a black hole, its radius and surface area should also get bigger. For a black hole, this radius is called the Schwarzschild radius. The second law of thermodynamics states that the entropy of a closed system always increases or remains the same. In 1974, Stephen Hawking, an English theoretical physicist and cosmologist, proposed a groundbreaking theory regarding a special kind of radiation, which later became known as Hawking radiation. Hawking had postulated an analogous theorem for black holes, called the second law of black hole mechanics: in any natural process, the surface area of the event horizon of a black hole always increases or remains constant; it never decreases. In thermodynamics, a black body doesn't transmit or reflect any radiation; it only absorbs radiation.
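For reference, the Schwarzschild radius mentioned above has a simple standard form, where G is the gravitational constant, M the black hole's mass, and c the speed of light:

```latex
r_s = \frac{2GM}{c^2}
```

For the sun this works out to about 3 km, so a core of ten solar masses collapses to a sphere only about 30 km in radius, consistent with the New York City comparison made at the start of this article.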

When Stephen Hawking first saw these ideas, he found the notion of shining black holes preposterous. But when he applied the laws of quantum mechanics to general relativity, he found the opposite to be true: he realized that stuff can come out near the event horizon. In 1974, he published a paper outlining a mechanism for this shine, based on the Heisenberg uncertainty principle. According to quantum mechanics, for every particle throughout the universe there exists an antiparticle. These particles always appear in pairs and continually pop in and out of existence everywhere in the universe. Typically they don't last long, because as soon as a particle and its antiparticle pop into existence, they annihilate each other and cease to exist almost immediately after their creation.

The event horizon is the boundary beyond which nothing can escape the black hole's gravity. If a virtual particle pair blips into existence very close to the event horizon, one of the particles can fall into the black hole while the other escapes. The one that falls in effectively has negative energy, which is, in layman's terms, akin to subtracting energy from the black hole, or taking mass away from it. The other particle of the pair, the one that escapes, has positive energy and is referred to as Hawking radiation.
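Hawking's calculation assigns the black hole a temperature; the standard expression, with ħ the reduced Planck constant and k_B Boltzmann's constant, is:

```latex
T_H = \frac{\hbar c^3}{8 \pi G M k_B}
```

Note that the temperature is inversely proportional to the mass M: small black holes radiate fiercely, while stellar-mass black holes are far colder than their surroundings.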

The first-ever image of a black hole by the Event Horizon Telescope (EHT), 2019

Due to the presence of Hawking radiation, a black hole continues to lose mass and keeps shrinking until the point where it loses all its mass and evaporates. It is not clearly established what an evaporating black hole would actually look like. The Hawking radiation itself would contain highly energetic particles, antiparticles, and gamma rays. Such radiation is invisible to the naked eye, so an evaporating black hole might not look like anything at all. It is also possible that Hawking radiation might power a hadronic fireball, which would degrade the radiation into gamma rays and particles of less extreme energy, making an evaporating black hole visible. Scientists and cosmologists still don't completely understand how quantum mechanics explains gravity, but Hawking radiation continues to inspire research and provide clues into the nature of gravity and how it relates to the other forces of nature.

CYBER CRIME CASE STUDY IN INDIA

Cyber crime encompasses any criminal act dealing with computers and networks (often called hacking). Additionally, cyber crime also includes traditional crimes conducted through the internet. For example, the computer may be used as a tool in the following kinds of activity: financial crimes, sale of illegal articles, pornography, online gambling, intellectual property crime, e-mail spoofing, forgery, cyber defamation, and cyber stalking. The computer may, however, also be the target of unlawful acts in the following cases: unauthorized access to a computer, computer system, or computer network; theft of information contained in electronic form; e-mail bombing; Trojan attacks; internet time theft; theft of a computer system; and physically damaging a computer system.

Cyber Law is the law governing cyberspace. Cyberspace is a wide term and includes computers, networks,software, data storage devices (such as hard disks, USB disks), the Internet, websites, emails and even electronic devices such as cell phones, ATM machines etc.

Computer crimes encompass a broad range of potentially illegal activities. Generally, however, they may be divided into two categories:

(1) Crimes that target computer networks or devices directly; Examples – Malware and malicious code, Denial-of-service attacks and Computing viruses.

(2) Crimes facilitated by computer networks or devices, the primary target of which is independent of the computer network or device. Examples – Cyber stalking, Fraud and identity theft, Phishing scams and Information warfare.

CASE STUDIES

Case no. 1: Hosting Obscene Profiles (Tamil Nadu)

This case concerns the hosting of obscene profiles and was solved by an investigation team in Tamil Nadu. The complainant was a girl, and the suspect was her college mate. The suspect had created fake profiles of the complainant and posted them on dating websites, as revenge for her not accepting his marriage proposal. That is the background of the case.

Investigation Process

Let's get into the investigation process. Acting on the girl's complaint, the investigators analysed the web page hosting her profile and details. They logged in to the fake profile by determining its credentials and, using the access logs, found out where the profiles had been created. They identified two IP addresses and also identified the ISP. From the ISP's details they determined that the profiles had been uploaded from an internet café. The investigators went to that café and, from its register, determined the suspect's name. He was then arrested, and on examining his SIM card the investigators found the complainant's number.

Conclusion

The suspect was convicted of the crime and sentenced to two years of imprisonment as well as a fine.

Case no. 2: Illegal Money Transfer (Maharashtra)

This case concerns an illegal money transfer and took place in Maharashtra. The accused worked in a BPO handling the business of a multinational bank. He used confidential information about the bank's customers to transfer huge sums of money out of their accounts.

Investigation Process

Let's see the investigation process of the case. Acting on the complaint received from the firm, investigators analysed and studied its systems to determine the source of the data theft. During the investigation the BPO's system server logs were collected. By tracing the IP addresses back to the internet service provider, they found that the illegal transfers had ultimately been made through a cyber café, using SWIFT codes. The registers kept at the cyber café helped identify the accused in the case, and almost 17 accused were arrested.

Conclusion

The trial in this case is not yet complete; it is pending in court.

Case no. 3: Creating a Fake Profile (Andhra Pradesh)

The next case concerns the creation of a fake profile and took place in Andhra Pradesh. The complainant received obscene emails from unknown email IDs, and she also noticed that obscene profiles and pictures of her had been posted on matrimonial sites.

Investigation Process

The investigators collected the original emails and determined their IP address. From the IP address they could confirm the internet service provider, and this led the investigating officer to the accused's house. They then searched the house and seized a desktop computer and a handycam. By analysing and examining the desktop computer and the handycam, they found the obscene emails as well as an identical copy of the uploaded photos on the handycam. The accused was the divorced husband of the complainant.

Conclusion

Based on the evidence collected from the handycam and the desktop computer, a charge sheet has been filed against the accused, and the case is currently pending trial.

Hacking is a widespread crime nowadays due to the rapid development of computer technologies. To protect against hacking there are numerous brand-new technologies, updated every day, but it is often difficult to withstand a hacker's attack effectively. Through case studies such as these, one can learn about the cause and effect of hacking and then evaluate the whole impact of a hacker on an individual or an organization.

The history of surgery and its advancements today

Treating illness by using tools to remove or manipulate parts of the human body is an old idea. Even minor operations carried high risks, but that doesn't mean all early surgery failed. Indian doctors, beginning centuries before the birth of Christ, successfully removed tumors and performed amputations and other operations. They developed dozens of metal tools, relied on alcohol to dull the patient, and controlled bleeding with hot oil and tar. The 20th century brought even more radical change through technology. Advances in fiber-optic technology and the miniaturization of video equipment have revolutionized surgery. The laparoscope is the James Bond-like gadget of the surgeon's repertoire of instruments: only a small incision is made through the patient's abdominal wall, into which the surgeon puffs carbon dioxide to open up a passage.

Using a laparoscope for visual assessment, diagnosis, and even surgery causes less physiological damage, reduces patients' pain, and speeds their recovery, leading to shorter hospital stays. In the early 1900s, Germany's Georg Kelling developed a surgical technique in which he injected air into the abdominal cavity and inserted a cystoscope, a tube-like viewing scope, to assess the patient's innards. In late 1901, he began experimenting and successfully peered into a dog's abdominal cavity using the technique. Without cameras, laparoscopy's use was limited to diagnostic procedures carried out by gynecologists and gastroenterologists.

By the 1980s, improvements in miniature video devices and fiber optics inspired surgeons to embrace minimally invasive surgery. In 1996, the first live broadcast of a laparoscopy took place. A year later, Dr. J. Himpens used a computer-controlled robotic system to aid in laparoscopy. This type of surgery is now used for gallbladder removal as well as for the diagnosis and surgical treatment of fertility disorders, cancer, and hernias.

Hypothermia, a drop in body temperature significantly below normal, can be life-threatening, as in cases of overexposure to severe wintry conditions. But in some cases, like that of Kevin Everett of the Buffalo Bills, hypothermia can be a lifesaver. Everett fell to the ground with a potentially crippling spinal cord injury during a 2007 football game. Doctors treating him on the field immediately injected his body with a cooling fluid. At the hospital, they inserted a cooling catheter to lower his body temperature by roughly five degrees, at the same time proceeding with surgery to fix his fractured spine. Despite fears that he would be paralyzed, Everett has regained his ability to walk, and advocates of therapeutic hypothermia believe his lowered body temperature may have made the difference.

Robotic surgery allows surgeons to perform complex rectal cancer surgery

Therapeutic hypothermia is still a controversial procedure. The side effects of excessive cooling include heart problems, blood clotting, and increased infection risk. On the other hand, supporters claim, it slows down cell damage, swelling, and other destructive processes well enough that it can mean successful surgery after a catastrophic injury. Surgical lasers can generate heat of up to 10,000°F on a pinhead-sized spot, sealing blood vessels and sterilizing tissue. Surgical robots and virtual computer technology are changing medical practice; robotic surgical tools increase precision. In 1998, heart surgeons at Paris's Broussais Hospital performed the first robotic surgery. The new technology allows enhanced views and precise control of instruments.

“After a complex laparoscopic operation, the 65-year-old patient was home in time for dinner.” – Elisa Birnbaum, surgeon

Can we fix our Ozone layer? The Montreal protocol

Imagine that one day our ozone layer disappeared. What would happen? How long could we survive without it? The ozone layer is a region of Earth's atmosphere that contains a high concentration of ozone (O3), a highly reactive gas composed of three oxygen atoms. It is found in the lower portion of the stratosphere and absorbs 97 to 99 percent of the sun's ultraviolet rays. Direct exposure to UV rays can cause serious skin problems, including sunburn, skin cancer, premature ageing of the skin, and solar elastosis. It can also cause eye problems and can damage our immune system.

The depletion of the ozone layer was first observed by the Dutch chemist Paul Crutzen. He described ozone depletion by demonstrating how nitrogen oxides react with oxygen atoms, slowing the creation of ozone (O3). Later, in 1974, the American chemists Mario Molina and F. Sherwood Rowland observed that chlorofluorocarbon (CFC) molecules, emitted by man-made machines like refrigerators, air conditioners, and airplanes, could be the major source of chlorine in the atmosphere. One chlorine atom can destroy 100,000 ozone molecules.
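The standard chlorine catalytic cycle behind this number runs as follows; because the chlorine atom is regenerated at the end, a single atom can go on destroying ozone molecules over and over:

```latex
\mathrm{Cl} + \mathrm{O}_3 \rightarrow \mathrm{ClO} + \mathrm{O}_2
\qquad
\mathrm{ClO} + \mathrm{O} \rightarrow \mathrm{Cl} + \mathrm{O}_2
\qquad
\text{net: } \mathrm{O}_3 + \mathrm{O} \rightarrow 2\,\mathrm{O}_2
```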

Not all chlorine molecules contribute to ozone layer depletion; chlorine from swimming pools, sea salt, industrial plants, and volcanoes does not reach the stratosphere. The ozone hole over Antarctica is one of the largest and deepest depletions; it was discovered by British scientists and made worldwide headlines. According to NASA scientist Paul Newman, if depletion had continued at that rate, our ozone layer would likely have disappeared by 2065. If that happened, UV rays from the sun would reach the earth directly and cause severe health issues: humans could last about three months, and plants might die within two weeks under heavy UV radiation. Earth would thus become uninhabitable.

Fortunately, in 1987 the Montreal Protocol was agreed, banning chlorofluorocarbons and other chemicals that cause ozone depletion. Surprisingly, it works: research from 2018 shows that the ozone layer has been repairing itself at a rate of 1% to 3% per decade since 2000. Still, it will take at least 50 years for a complete recovery. The greenhouse effect, meanwhile, allows the short-wave radiation of sunlight to pass through the atmosphere to earth's surface but makes it difficult for heat, in the form of long-wave radiation, to escape. This effect blankets the earth and keeps our planet at a reasonable temperature to support life. Earth radiates energy, of which about 90 percent is absorbed by atmospheric gases like water vapor, carbon dioxide, ozone, methane, nitrous oxide, and others. The absorbed energy is radiated back to the surface and warms earth's lower atmosphere.

These gases have come to be called greenhouse gases because they hold in light and heat, just as a greenhouse does for the sake of the plants inside. Greenhouse gases are essential to life, but only at an appropriate balance point. They increased during the 20th century due to industrial activity and fossil fuel emissions; for example, the concentration of carbon dioxide in the atmosphere has recently been growing by about 1.4 percent annually. This increase in greenhouse gases is one of the contributors to the observed patterns of global warming. On September 16th, World Ozone Day, we can celebrate our success. "But we must all push to keep hold of these gains, in particular by remaining vigilant and tackling any illegal sources of ozone-depleting substances as they arise," says the UN Ozone Secretariat. Without the Montreal Protocol, life on earth could have been a question mark, so let's keep working hard. "OZONE FOR LIFE".

The beginning of Art: Visual arts history

Expressing oneself through art seems a universal human impulse, while the style of that expression is one of the distinguishing marks of a culture. As difficult as it is to define, art typically involves a skilled, imaginative creator whose creation is pleasing to the senses and often symbolically significant or useful. Art can be verbal, as in poetry, storytelling, or literature, or can take the form of music and dance. The oldest stories, passed down orally, may be lost to us now, but thanks to writing, tales such as the Epic of Gilgamesh or the Iliad entered the record and still hold meaning today. Visual art dates back 30,000 years, when Paleolithic humans decorated themselves with beads and shells. Then as now, skilled artisans often mixed aesthetic effect with symbolic meaning.

A masterpiece of Johannes Vermeer, 1665: “Girl with a Pearl Earring”

In an existence centered on hunting, ancient Australians carved animal and bird tracks into their rocks. Early cave artists in Lascaux, France, painted or engraved more than 2,000 real and mythical animals. Ancient Africans created stirring masks, highly stylized depictions of animals and spirits that allow the wearer to embody the spiritual power of those beings. Even when creating tools or kitchen items, people seem unable to resist decorating or shaping them for beauty. Ancient hunters carved the ivory handles of their knives; Ming dynasty ceramists embellished plates with graceful dragons; modern Pueblo Indians incorporate traditional motifs into their carved and painted pots. The Western fine arts tradition values beauty and message. Once heavily influenced by Christianity and classical mythology, painting and sculpture have more recently moved toward personal expression and abstraction.

Humans have probably been molding clay, one of the most widely available materials in the world, since the earliest times. The era of ceramics began, however, only after the discovery that very high heat renders clay hard enough to be impervious to water. As societies grew more complex and settled, the need for ways to store water, food, and other commodities increased. In Japan, the Jomon people were making ceramics as early as 11,000 B.C. By about the seventh millennium B.C., kilns were in use in the Middle East and China, achieving temperatures above 1832°F. Mesopotamians were the first to develop true glazes, though the art of glazing arguably reached its highest expression in the celadon and three-color glazes of medieval China. In the New World, although potters never reached the heights of technology seen elsewhere, Moche, Maya, Aztec, and Puebloan artists created a diversity of expressive figurines and glazed vessels.

The prehistoric cave paintings of El Castillo, Spain, are almost 40,800 years old

When the Spanish nobleman Marcelino Sanz de Sautuola described the paintings he discovered in a cave at Altamira, contemporaries declared the whole thing a modern fraud. Subsequent finds confirmed the validity of his claims and proved that Paleolithic people were skilled artists. Early artists used stone tools to engrave shapes into walls. They used pigments from hematite, manganese dioxide, and evergreens to achieve red, yellow, brown, and black colors. Brushes were made from feathers, leaves, and animal hair, and artists also used blowpipes to spray paint around hands and stencils.

“History is remembered by its art, not its war machines.” – James Rosenquist

The origin of glass: why is it transparent?

Archaeological findings suggest that glass was first created during the Bronze Age in the Middle East. To the southeast, in Egypt, glass beads have been found dating back to about 2500 B.C.E. Glass is made from a mixture of silica sand, calcium oxide, soda, and magnesium, which is melted in a furnace at 2,730°F (1,500°C). Most early furnaces produced insufficient heat to melt the glass properly, so glass was a luxury item that few people could afford. This situation changed in the first century B.C.E., when the blowpipe was discovered. Glass manufacturing spread throughout the Roman Empire in such quantities that glass was no longer a luxury. It flourished in Venice in the fifteenth century, where soda-lime glass, known as 'cristallo', was developed. Venetian glass objects were said to be the most delicate and graceful in the world.

How is glass made?

It all begins in the earth's crust, where the two most common elements are silicon and oxygen. These react together to form silicon dioxide, whose molecules arrange themselves into a regular crystalline form known as quartz. Quartz is commonly found in sand, where it often makes up most of the grains, and it is the main ingredient in most types of glass. You have probably noticed that glass isn't made of multiple tiny bits of quartz, and for good reason: the edges of the rigidly formed grains, and smaller defects within the crystal structure, reflect and disperse light that hits them. But when quartz is heated high enough, the extra energy makes the molecules vibrate until they break the bonds holding them together and become a flowing liquid, the same way that ice melts into water.

Unlike water, though, liquid silicon dioxide does not re-form into a crystalline solid when it cools. Instead, as the molecules lose energy, they are less and less able to move into an ordered position, and the result is what is called an amorphous solid: a solid material with the chaotic structure of a liquid, which allows the molecules to freely fill in any gaps. This makes the surface of glass uniform on a microscopic level, allowing light to strike it without being scattered in different directions.

How is glass transparent?

Ancient glass materials found in Rome.

Why is light able to pass through glass rather than being absorbed, as with most solids? You may know that an atom consists of a nucleus with electrons orbiting around it, but you may not know that an atom is mostly empty space, so light can pass through without hitting any of these particles. Then why aren't all materials transparent? The answer lies in the different energy levels that electrons in an atom can occupy. An electron initially moves in a certain orbit, but if it gains enough energy, it can jump to an excited state in another orbit, and one of the light photons passing through can provide the needed energy. There is one catch: the energy from the photon has to be exactly the right amount to lift an electron to the next level; otherwise the atom simply lets the photon pass by. It just so happens that in glass the energy gaps between electron states are so large that photons of visible light cannot supply enough energy to bridge them. Photons of ultraviolet light, however, give just the right amount of energy and are absorbed; that's why you can't get a suntan through glass. This amazing property of being both solid and transparent has given glass many uses throughout the centuries.
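The arithmetic behind this is the photon-energy relation, with h Planck's constant, ν the light's frequency, λ its wavelength, and c the speed of light:

```latex
E = h\nu = \frac{hc}{\lambda}
```

Ultraviolet light has a shorter wavelength than visible light, so its photons carry more energy, enough to bridge the large gap to an excited electron state in glass; that is why UV is absorbed while visible light passes straight through.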

In the 1950s, Sir Alastair Pilkington introduced 'float glass production', a revolutionary method still used to make glass. Other developments have included safety glass, heat-resistant glass, and fiber optics, in which light pulses are sent along thin fibers of glass. Fiber-optic devices are used in telecommunications and in medicine for viewing inaccessible parts of the human body.

Are perpetual motion machines possible or not? Free energy?

Most of us have had this idea: magnets attract each other at opposite poles, so why can't we use this to create free energy? Say we place a magnet or a piece of metal on a car and mount another magnet on a rod in front of the car, so that the two keep attracting each other; with this idea, we could move the car without any energy, forever. A perpetual motion machine is a device that is supposed to work indefinitely without any external energy source. Imagine a windmill that produced the breeze it needed to keep rotating, or a light bulb whose glow provided its own electricity. These devices have captured many inventors' imaginations because they could transform our relationship with energy. It sounds cool, right? But there is just one problem: it won't work.

Bhaskara's wheel – the oldest perpetual motion machine design

In countless instances throughout history, people have claimed to have made a perpetual motion machine. Around 1159 A.D., a mathematician called Bhaskara the Learned sketched a design for a wheel containing curved reservoirs of mercury. He reasoned that as the wheel spun, the mercury would flow to the bottom of each reservoir, leaving one side of the wheel perpetually heavier than the other; the imbalance would keep the wheel turning forever. Bhaskara's drawing was one of the earliest designs for a perpetual motion machine. Many more people have claimed to build one since, with designs like Zimara's self-blowing windmill in the 1500s, the capillary bowl, in which capillary action supposedly forces the water upwards, the Oxford Electric Bell, which ticks back and forth due to charge repulsion, and so on. In fact, the US patent office stopped granting patents for perpetual motion machines without a working prototype.

Why perpetual motion machines won't work

Ideas for perpetual motion machines all violate one or more fundamental laws of thermodynamics, the laws that describe the relationship between different forms of energy. The first law of thermodynamics says that energy can neither be created nor destroyed: you can't get out more energy than you put in. That rules out a useful perpetual motion machine right away, because a machine could only ever produce as much energy as it consumed; there would be no leftover energy to power a car or charge a phone. But what if you just wanted the machine to keep itself moving? Take Bhaskara's wheel: the moving parts that make one side of the wheel heavier also shift its center of mass downward, below the axle. With a low center of mass, the wheel just swings back and forth like a pendulum and eventually stops. In the 17th century, Robert Boyle came up with an idea for a self-watering pot. He theorized that capillary action, the attraction between liquids and surfaces that pulls water through thin tubes, might keep water cycling around the bowl. But if the capillary action is strong enough to overcome gravity and draw the water up, it will also prevent it from falling back into the bowl.
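In symbols, the first law says that the change in a system's internal energy equals the heat added to it minus the work it does:

```latex
\Delta U = Q - W
```

Over a complete cycle of any machine, ΔU = 0, so the work out can never exceed the energy put in; a machine producing work from nothing (W > 0 with Q = 0) is forbidden outright.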

John Keely’s perpetual motion machine

For each of these machines to keep moving, it would have to create some extra energy to nudge the system past its stopping point, breaking the first law of thermodynamics. There are machines that seem to keep moving, but in reality they invariably turn out to be drawing energy from some external source. Even if engineers could design a machine that didn't violate the first law of thermodynamics, it still wouldn't work in the real world, because of the second law. The second law of thermodynamics tells us that energy tends to spread out through processes like friction and heating. Any real machine has moving parts or interactions with air or liquid molecules that generate tiny amounts of friction and heat, even in a vacuum. That heat is energy escaping, and it keeps leaking out, reducing the energy available to move the system itself until the machine inevitably stops. The same goes for the idea of a car with magnets: the magnets won't be able to move the car, and even if a magnet were powerful enough to move it, friction would come into play and eventually stop the car. These two laws of thermodynamics defeat every idea for perpetual motion, so we can conclude that perpetual motion machines are impossible.
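The second law can be stated just as compactly: in any real, irreversible process, the total entropy of the system and its surroundings grows,

```latex
\Delta S_{\text{total}} \geq 0
```

with equality holding only in the idealized reversible limit. Real machines with friction always sit strictly above zero, which is why they inevitably wind down.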

YOU CAN'T GET SOMETHING FOR NOTHING.

Emergence of steam engines – the Industrial Revolution

Thomas Newcomen, a Devonshire blacksmith, developed the first successful steam engine in the world and used it to pump water from mines. His engine was a development of the thermic siphon built by Thomas Savery, whose surface condensation patents blocked his own designs. Newcomen's engine allowed steam to condense inside a water-cooled cylinder, the vacuum produced by this condensation being used to draw down a tightly fitting piston that was connected by chains to one end of a huge, wooden, centrally pivoted beam. The other end of the beam was attached by chains to a pump at the bottom of the mine. The whole system was run safely at near atmospheric pressure, the weight of the atmosphere being used to depress the piston into the evacuated cylinder.

Newcomen's first atmospheric steam engine worked at Conygree in the West Midlands of England. Many more were built in the next seventy years, the initial brass cylinders being replaced by larger cast iron ones, some up to 6 feet (1.8 m) in diameter. The engine was relatively inefficient, and in areas where coal was not plentiful it was eventually replaced by double-acting engines designed by James Watt. These used both sides of the cylinder for power strokes and usually had separate condensers. James Watt was responsible for some of the most important advances in steam engine technology.

James Watt’s steam engine, 1764

In 1765 Watt made the first working model of his most important contribution to the development of steam power, and he patented it in 1769. His innovation was an engine in which steam condensed outside the main cylinder in a separate condenser, so the cylinder remained at working temperature at all times. Watt made several other technological improvements to increase the power and efficiency of his engines. For example, he realized that, within a closed cylinder, low pressure steam could push the piston instead of atmospheric air. It took only a short mental leap for Watt to design a double-acting engine in which steam pushed the piston first one way, then the other, increasing efficiency still further.

Watt's influence in the history of steam engine technology owes as much to his business partner, Matthew Boulton, as it does to his own ingenuity. The two men formed a partnership in 1775, and Boulton poured huge amounts of money into Watt's innovations. From 1781, Boulton and Watt began making and selling steam engines that produced rotary motion. All the previous engines had been restricted to a vertical, pumping action. Rotary steam engines were soon the most common source of power for factories, becoming a major driving force behind Britain's Industrial Revolution.

The world's first locomotive by Richard Trevithick, 1804

By the age of nineteen, Cornishman Richard Trevithick worked for the Cornish mining industry as a consultant engineer. The mine owners were attempting to skirt around the patents owned by James Watt. William Murdoch had developed a model steam carriage, starting in 1784, and demonstrated it to Trevithick in 1794. Trevithick thus knew that recent improvements in the manufacturing of boilers meant that they could now cope with much higher steam pressure than before. By using high pressure steam in his experimental engines, Trevithick was able to make them smaller, lighter, and more manageable.

Trevithick constructed high pressure working models of both stationary and locomotive engines that were so successful that in 1799 he built a full scale, high pressure engine for hoisting ore. The used steam was vented out through a chimney into the atmosphere, bypassing Watt's patents. Later, he built a full size locomotive that he called the Puffing Devil. On December 24, 1801, this bizarre-looking machine successfully carried several passengers on a journey up Camborne Hill in Cornwall. Despite objections from Watt and others about the dangers of high pressure steam, Trevithick's work ushered in a new era of mechanical power and transport.

Are electric cars the future? The success story of Tesla

In 1834, Robert Anderson of Scotland created the first electric carriage. The following year, a small electric car was built by the team of Professor Stratingh of Groningen, Holland, and his assistant, Christopher Becker. More practical electric vehicles were brought onto the road by both the American Thomas Davenport and the Scotsman Robert Davidson in 1842. Both of these inventors introduced non-rechargeable electric cells in their electric cars. The Parisian engineer Charles Jeantaud fitted a carriage with an electric motor in 1881. William Edward Ayrton and John Perry, professors at London's City and Guilds Institute, began road trials with an electric tricycle in 1882; three years later a battery-driven electric cab serviced Brighton.

Electric cars during the 1890s in the United States

Around 1900, internal combustion engines were only one of three competing technologies for propelling cars. Steam engines were also in use, while electric vehicles were clean, quiet, and did not smell. In the United States, electric cabs dominated in major cities for several years. The electric vehicle did not fail because of the limited range of batteries or their weight. Historian Michael Schiffer and others maintain, rather, that failed business strategies were more important. Thus, most motor cars in the twentieth century relied on internal combustion, except for niche applications such as urban deliveries. At the end of the century, after several efforts from small manufacturers, General Motors made available an all-electric vehicle, the EV1, from 1996 to 2003. In the late 1990s, Toyota and Honda introduced hybrid vehicles combining internal combustion engines and batteries.

How Tesla was created?

Entrepreneur Elon Musk is the man behind many modern innovations, including the digital payment service PayPal, the independent space travel company SpaceX, and the electric car company Tesla Motors. Tesla Motors is named after Nikola Tesla, a Serbian-American inventor who contributed to the development of alternating current electricity. In 2003 two Silicon Valley engineers, Martin Eberhard and Marc Tarpenning, sold their eBook business for 187 million dollars and started Tesla to build a greener car. Elon Musk joined as an early investor, leading the Series A financing and taking on several other roles as well. Tesla's plan was simple but potentially genius. They focused on lithium-ion batteries, which they expected to get cheaper and more powerful for many years, and they planned to start their journey with a high margin, high performance sports car. Tesla also planned to integrate energy generation and storage in the home and develop other emerging technologies like autonomous vehicles.

Tesla Gigafactory in Tilburg, Netherlands

With this plan set, the company was ready to build a high performance, low volume sports car, the Roadster. Finally, in 2008, Tesla Motors released its first car, the completely electric Roadster. In the same year, Martin and Marc left the company, and eventually Elon Musk took over as CEO. He made drastic changes, raising $40 million in debt financing and borrowing $465 million from the US government. In 2012 Tesla started focusing on two new cars, the Model S and the Model X. Beginning in 2012, Tesla built stations called Superchargers in the United States and Europe, designed for charging batteries quickly and at no extra cost to Tesla owners. These two models were poised for success, but the high cost of lithium-ion batteries made them luxury items. To compensate for this, in 2013 Tesla began building large factories called Gigafactories to produce lithium-ion batteries and cars on a large scale, ultimately making Tesla cars cheaper than gas powered vehicles. Tesla then gave its Model S an Autopilot system, which provides semi-autonomous capabilities. By the end of 2017, Tesla passed Ford in market value. Tesla released another crossover, the Model Y, in 2020; it was smaller and less expensive than the Model X and shared many parts with the Model 3. Tesla has announced several models to be released in the future, including a second version of the Roadster, a Semi trailer truck, and a pickup truck, the Cybertruck.

The journey of rocket science from 900 A.D. till now

The history of rocketry dates back to around 900 C.E., but the use of rockets as highly destructive missiles able to carry large payloads of explosives was not feasible until the late 1930s. War has been the catalyst for many inventions, both benevolent and destructive. The ballistic missile is intriguing because it can be both of these things: it has made possible some of the greatest deeds mankind has ever achieved, and also some of the worst. The German Walter Dornberger and his team began developing rockets in 1938, but it was not until 1944 that the first ballistic missile, the Aggregat-4 or V-2 rocket, was ready for use. The V-2 was used extensively by the Nazis at the end of World War II, primarily as a terror weapon against civilian targets. They were powerful and imposing: 46 feet (14 m) long, able to reach speeds of around 3,500 miles per hour (5,600 kph) and deliver a warhead of around 2,200 pounds (1,000 kg) at a range of 200 miles (320 km).

The German V-1 of World War II was the world’s first guided missile.

Ballistic missiles follow a ballistic flight path, determined by the brief initial powered phase of the missile's flight. This is unlike guided missiles, such as cruise missiles, which are essentially unmanned airplanes packed with explosives. It also meant that the early V-2s flew inaccurately, so they were of most use in attacking large, city-sized targets such as London, Paris, and Antwerp. The Nazi ballistic missile program has had both a great and a terrible legacy. Ballistic missiles such as the V-2 were scaled up to produce not only intercontinental ballistic missiles with a variety of warheads, but also the craft that have carried people into space. Ballistic missiles may have led us to the point of self-destruction, but they have also allowed us to venture beyond our atmosphere.

The intercontinental ballistic missiles (ICBM)

Intercontinental ballistic missiles (ICBMs) were first developed by the United States in 1959. An ICBM is a guided ballistic missile with a minimum range of 5,500 kilometres, primarily designed for nuclear weapons delivery. The United States, Russia, China, France, India, the United Kingdom and North Korea are the only countries that have operational ICBMs. An ICBM has a three stage booster: during the boost phase the rocket gets the missile airborne, and this phase lasts around 2 to 5 minutes, until the ICBM has reached space. ICBMs have up to three rocket stages, with each one ejected or discarded after it burns out.

The DF-41 is currently the most powerful Intercontinental Ballistic Missile (ICBM), developed in China

They use either liquid or solid propellant. Liquid fuel rockets tend to burn longer in the boost phase than solid propellant ones. The second phase of an ICBM's flight begins once the rocket has reached space, where it continues along its ballistic trajectory. At this point the rocket will be travelling anywhere from 24,140 to 27,360 kilometres an hour. The final phase is the ICBM's separation and re-entry into earth's atmosphere. The nose cone section carrying the warhead separates from the final rocket booster and drops back to earth. If the ICBM has rocket thrusters, those are used at this point to orient it towards the target. It is important that ICBMs have adequate heat shields to survive re-entry; if not, they burn up and fall apart. It is also important to note that although several countries have ICBMs, none has ever been fired in anger against another country.

"This third day of October, 1942, is the first of a new era in transportation, that of space travel." – Walter Dornberger

The James Webb Space Telescope – the world's most powerful telescope

The James Webb space telescope, or JWST, will replace the Hubble space telescope. It will help us to see the universe as it was shortly after the big bang. It was named after the second head of NASA, James Webb, who led the agency from 1961 to 1968. The new telescope was first planned for launch into orbit in 2007 but has since been delayed more than once; it is now scheduled for 18 December 2021. After 2030 the Hubble will go to a well-deserved rest: since its launch in 1990 it has provided more than a million images of thousands of stars, nebulae, planets and galaxies. Hubble has captured images of stars as they were about 380 million years after the big bang, which supposedly happened 13.7 billion years ago. These objects may no longer exist, yet we still see their light. Now we expect the James Webb to show us the universe as it was only 100 to 250 million years after its birth. It can transform our current understanding of the structure of the universe. The Spitzer and Hubble space telescopes have collected data on the gas shells of about a hundred planets; according to experts, the James Webb is capable of exploring the atmospheres of more than 300 different exoplanets.

The main mirror – a giant honeycomb consisting of 18 sections

The working of the James Webb space telescope

The James Webb is an orbiting infrared observatory that will investigate the thermal radiation of space objects. When heated to a certain temperature, all solids and liquids emit energy in the infrared spectrum, and there is a relationship between wavelength and temperature: the higher the temperature, the shorter the wavelength and the higher the radiation intensity. The James Webb's sensitive equipment will be able to study even cold exoplanets with surface temperatures of up to 27° Celsius. An important quality of this new telescope is that it will revolve around the sun and not the earth, unlike Hubble, which is located at an altitude of about 570 kilometers in low earth orbit. With the James Webb orbiting the sun, it will be impossible for the earth to interfere with it; however, the James Webb will move in sync with the earth to maintain strong communication, even though its distance from the earth will be between about 374,000 and 1.5 million kilometers in the direction opposite the sun. So its design must be extremely reliable.
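
The temperature-wavelength relationship described above is captured by Wien's displacement law; a minimal sketch in Python:

    # Wien's displacement law: the wavelength at which a warm body radiates
    # most strongly is inversely proportional to its temperature.
    WIEN_B = 2.898e-3  # Wien's constant, metre-kelvins

    def peak_wavelength_m(temperature_k: float) -> float:
        """Peak emission wavelength (metres) for a body at temperature_k kelvins."""
        return WIEN_B / temperature_k

    # A cool exoplanet near 27 C (300 K) radiates mostly in the infrared:
    print(peak_wavelength_m(300))    # ~9.7e-6 m, i.e. ~9.7 micrometres (mid-infrared)
    # The sun's surface (~5800 K) peaks in visible light instead:
    print(peak_wavelength_m(5800))   # ~5.0e-7 m, i.e. ~500 nanometres

This is why an infrared observatory is the right tool for cold targets: their thermal glow never reaches visible wavelengths.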

The James Webb telescope weighs 6.2 tonnes. The main mirror of the telescope has a diameter of 6.5 meters and a collecting area of 25 square meters; it resembles a giant honeycomb consisting of 18 sections. Due to its impressive size, the main mirror has to be folded for launch. This giant mirror will capture light from the most distant galaxies, creating a clear picture and eliminating distortion. A special type of beryllium was used in the mirror, which retains its shape at low cryogenic temperatures. The front of the mirror is covered with a layer of 48.25 grams of gold, 100 nanometers thick; such a coating best reflects infrared radiation. A small secondary mirror sits opposite the main mirror; it receives light from the main mirror and directs it to instruments at the rear of the telescope. The sunshield is 20 meters long and 7 meters wide. It is composed of very thin layers of Kapton polyimide film, which protect the mirror and instruments from sunlight and cool the telescope's ultra-sensitive detectors to about −220° Celsius.

The NIRCam (Near Infrared Camera) is the main set of eyes of the telescope; with the NIRCam we expect to be able to view the oldest stars in the universe and the planets around them. The NIRSpec near infrared spectrograph will collect information on both the physical and chemical properties of an object. The MIRI mid-infrared instrument will allow us to see stars being born and many unknown objects of the Kuiper belt. The Near Infrared Imager and Slitless Spectrograph, or NIRISS, is aimed at finding exoplanets and the first light of distant objects. Finally, the FGS (Fine Guidance Sensor) helps accurately point the telescope for higher quality images; it updates the telescope's position in space sixteen times per second and controls the operation of the steering and main mirrors. The telescope is planned to launch with the help of the European launch vehicle Ariane 5 from the Kourou space center in French Guiana. The device is designed for between 5 and 10 years of operation, but it may serve longer. If everything goes well, $10 billion worth of construction and years of preparation will finally pay off in orbit.

The deepest image of the universe ever taken – Hubble Space Telescope

The Hubble space telescope is the most famous telescope in the world. It was named after the famous astronomer Edwin Hubble, who changed our understanding of the universe by proving the existence of other galaxies. An automatic observatory, it has discovered millions of new objects in space. It helped us to witness the birth of new stars, find planets outside the solar system and see supermassive black holes. Hubble was launched in 1990, and from December 1993 to May 2009 the telescope was repaired and updated several times; astronauts visited HST five times in order to make repairs and install new instruments.

Hubble holds the record for the longest range of observation. The light from the most distant galaxies has taken billions of years to travel across the universe and reach Hubble. By taking this picture, Hubble was literally looking back in time to the very early universe. On the right side of the image you can notice a galaxy very much like the Milky Way; that galaxy is about five billion light years away, so we are looking back in time by five billion years. On March 4th, 2016, NASA released a historic image, one that many believed was impossible. It captured the farthest away of all known galaxies, located about 13.4 billion light years from us. The light from this galaxy has only just reached the earth after crossing the distance that separates us; that is, we now observe it as it was 400 million years after the big bang. This galaxy is 25 times smaller than our galaxy, the Milky Way. Hubble also helped to pin down the age of the universe, now known to be 13.8 billion years, roughly three times the age of earth.

This view of nearly 10,000 galaxies is called the Hubble Ultra Deep Field. The snapshot includes galaxies of various ages, sizes, shapes, and colours. The smallest, reddest galaxies, about 100, may be among the most distant known, existing when the universe was just 800 million years old. The nearest galaxies – the larger, brighter, well-defined spirals and ellipticals – thrived about 1 billion years ago, when the cosmos was 13 billion years old. The image required 800 exposures taken over the course of 400 Hubble orbits around Earth. The total amount of exposure time was 11.3 days, taken between Sept. 24, 2003 and Jan. 16, 2004.

With its advanced camera, NASA's Hubble space telescope discovered a new planet called Fomalhaut b, orbiting its parent star Fomalhaut. Fomalhaut is 2.3 times heavier and 6 times larger than the sun; around it is a disc of cosmic dust which creates the resemblance of an ominous eye. Fomalhaut b lies 1.8 billion miles inside the ring's inner edge and orbits 10.7 billion miles from its star. Astronomers have calculated that Fomalhaut b completes an orbit around its parent star every 872 years. The Fomalhaut system is 25 light years away in the constellation Piscis Australis. But in April 2020, astronomers began doubting the planet's existence; it is missing in the new Hubble pictures. Scientists now believe that this "planet" was a cloud of dust and debris formed as a result of a collision of two icy celestial bodies.

Fomalhaut – the brightest star in the constellation of Piscis Austrinus

In 1994, Hubble captured the most detailed image of the iconic feature called the Pillars of Creation. The pillars are a fascinating but relatively small feature of the entire Eagle Nebula. The blue color in the image represents oxygen, red is sulfur, and green represents both nitrogen and hydrogen. The nebula, discovered in 1745 by the Swiss astronomer Jean-Philippe Loys de Chéseaux, is located 7,000 light years from earth in the constellation Serpens. During its working life Hubble has presented millions of images, but unfortunately NASA has suspended missions to repair and modernize the telescope. It is assumed that in 2021, Hubble will be replaced with the new James Webb space telescope.

How advanced are alien civilizations? – The Kardashev scale

The observable universe consists of up to two trillion galaxies that are made of billions and billions of stars. In the Milky Way galaxy alone, scientists estimate that there are some 40 billion Earth-like planets in the habitable zones of their stars. When you look at these numbers, there is plenty of room for alien civilizations to exist. In a universe that big and old, civilizations may start millions of years apart from each other and develop in different directions and at different speeds, so they may range anywhere from cavemen to the super-advanced. We know that humans started out with nothing and went on to make tools, build houses, and so on. We also know that humans are curious, competitive, greedy for resources, and expansionist; the more of these qualities our ancestors had, the more successful they were in the civilization-building process.

Other alien civilizations might have evolved in the same way. Human progress can be measured quite precisely by how much energy we extract from our environment. As our energy consumption grew exponentially, so did the abilities of our civilization: between 1800 and 2015, the population increased sevenfold, while humanity was consuming 25 times more energy. It is likely that this process will continue into the far future. Based on these facts, the scientist Nikolai Kardashev developed a method for categorizing civilizations, from cave dwellers to gods ruling over galaxies, called the Kardashev scale. It ranks civilizations by their energy use and puts them into four categories. A type 1 civilization is able to use the available energy of its home planet. A type 2 civilization is able to use the available energy of its star and planetary system. A type 3 civilization is able to use the available energy of its galaxy. A type 4 civilization is able to use the available energy of multiple galaxies.

The gap between types is like comparing an ant colony to a human metropolitan area: to ants we are so complex and powerful, we might as well be gods. On the lower end of the scale are type 0 to type 1 civilizations, anything from hunter-gatherers to something we could achieve in the next few hundred years. These might actually be abundant in the Milky Way. If so, why are they not sending any radio signals into space? Even if they transmitted radio signals like we do, it might not be very helpful: in such a vast universe, our signals may extend over 200 light years, but this is only a tiny fraction of the Milky Way. And even if someone were listening, after a few light years our signals decay into noise, impossible to identify as coming from an intelligent species. Today humanity ranks at about level 0.75. We have created huge structures and changed the composition and temperature of the atmosphere. If progress continues, we will become a full type 1 civilization in the next few hundred years. The next step toward type 2 is to try to mine other planets and bodies.
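
That 0.75 figure comes from Carl Sagan's interpolation formula for the Kardashev scale, K = (log10(P) − 6) / 10 with P in watts; a small sketch (the 2e13 W figure for humanity is an approximation):

    import math

    def kardashev_rating(power_watts: float) -> float:
        """Carl Sagan's interpolation: K = (log10(P) - 6) / 10."""
        return (math.log10(power_watts) - 6) / 10

    # Humanity's total power consumption, roughly 2e13 watts (approximate):
    print(round(kardashev_rating(2e13), 2))   # ~0.73, close to the 0.75 above
    # A full type 1 civilization commands its planet's budget, ~1e16 watts:
    print(kardashev_rating(1e16))             # 1.0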

The Dyson sphere – a mega-structure built around a star to draw energy

As a civilization expands and uses more and more material and space, at some point it may start its largest project yet: extracting the energy of its star by building a Dyson swarm. Once that is finished, energy becomes effectively unlimited, and the next frontier moves to other stars light years away. As a species gets closer to type 3, it might discover new physics, come to understand and control dark matter and energy, or become able to travel faster than light. To such beings, humans are the ants trying to understand the galactic metropolitan area. A high type 2 civilization might already consider humanity too primitive; a type 3 civilization might consider us bacteria. But the scale doesn't end here: some scientists suggest there might be type 4 and type 5 civilizations, whose influence stretches over galaxy clusters or superclusters. This scale is just a thought experiment, but it still raises interesting possibilities. Who knows, there might even be a type omega civilization, able to manipulate the entire universe, who might be the actual creators of our universe.

“Somewhere, something incredible is waiting to be known.” – Carl Sagan

How we measure extreme distances in space – Light years

In the 1800s, scientists discovered the realm of light beyond what is visible, and the 20th century saw dramatic improvements in observation technologies. Now we are probing distant planets, stars, galaxies and black holes so far away that even light takes years to reach us from them. So how do we do that? Light is the fastest thing we know in the universe. It is so fast that we measure enormous distances by how long it takes light to travel them. In one year, light travels about 6 trillion miles; that distance is what we call one light year. Apollo 11 took four days to reach the moon, yet the moon is only about one light second from earth. Meanwhile, the nearest star beyond our own sun, Proxima Centauri, is 4.24 light years away. Our Milky Way galaxy is on the order of 100,000 light years across, and the nearest galaxy to our own, Andromeda, is about 2.5 million light years away.
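
The "about 6 trillion miles" figure is simple arithmetic, multiplying the speed of light by the number of seconds in a year:

    C_MILES_PER_SEC = 186_282                 # speed of light, miles per second
    SECONDS_PER_YEAR = 365.25 * 24 * 3600     # one year, in seconds

    light_year_miles = C_MILES_PER_SEC * SECONDS_PER_YEAR
    print(f"{light_year_miles:.2e} miles")    # ~5.88e12, the "about 6 trillion" above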

The question is, how do we know the distances of these stars and galaxies? For objects that are very close by, we can use a concept called trigonometric parallax. Hold up your thumb and close your left eye; then open your left eye and close your right eye. Your thumb will appear to move, while more distant objects remain in place. The same concept applies to measuring distant stars, but they are much farther away than the length of your arm, and the earth is not large enough: even with different telescopes across the equator, you would not see much of a shift in position. So instead we look at the change in a star's apparent location over six months. When we measure the relative positions of the stars in summer, and then again in winter, nearby stars seem to have moved against the background of the more distant stars and galaxies.
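
In practice the whole method reduces to one formula: a star's distance in parsecs is the reciprocal of its parallax angle in arcseconds. A minimal sketch, using Proxima Centauri's measured parallax of about 0.768 arcseconds:

    PARSEC_IN_LIGHT_YEARS = 3.26156

    def parallax_distance_ly(parallax_arcsec: float) -> float:
        """Trigonometric parallax: distance (parsecs) = 1 / parallax (arcseconds)."""
        return (1.0 / parallax_arcsec) * PARSEC_IN_LIGHT_YEARS

    # Proxima Centauri shifts by ~0.768 arcseconds against the background stars:
    print(round(parallax_distance_ly(0.768), 2))   # ~4.25 light years, as quoted above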

But this method only works for objects less than a few thousand light years away. Beyond that, we use a different method, based on indicators called standard candles. Standard candles are objects whose intrinsic brightness, or luminosity, we know well. For example, if you know how bright your light bulb is, you can find your distance from it by comparing the amount of light you receive to its intrinsic brightness, even as you move away. In astronomy, one such standard candle is a special type of star called a Cepheid variable. These stars constantly contract and expand, so their brightness varies. We can calculate the luminosity by measuring the period of this cycle, with more luminous stars changing more slowly. By comparing the light that we receive to the intrinsic brightness, we can calculate the distance.
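
The light bulb comparison is just the inverse-square law; a short sketch (the luminosity and flux values below are hypothetical, chosen only to illustrate the calculation):

    import math

    def luminosity_distance_m(luminosity_w: float, flux_w_m2: float) -> float:
        """Inverse-square law: flux = L / (4*pi*d^2), so d = sqrt(L / (4*pi*F))."""
        return math.sqrt(luminosity_w / (4 * math.pi * flux_w_m2))

    # Hypothetical Cepheid: its pulsation period implies a luminosity of 1e30 W,
    # and we measure a flux of 1e-9 W per square metre at the telescope.
    d = luminosity_distance_m(1e30, 1e-9)
    print(f"{d:.2e} m")                      # ~8.9e18 m, roughly 940 light years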

The Type 1a Supernovae – Death of a star

But we can only observe individual stars up to about 40 million light years away. Beyond that, we use another type of standard candle: the type 1a supernova. Supernovae are giant stellar explosions, one of the ways that stars die. These explosions are so bright that they outshine the galaxies where they occur. We can use type 1a supernovae as standard candles because intrinsically bright ones fade more slowly than fainter ones. With this understanding of brightness and decline rate, we can use supernovae to probe distances up to several billions of light years away. But what is the importance of seeing distant objects? Well, the light emitted by the sun takes eight minutes to reach us, which means the sunlight we see now is a picture of the sun eight minutes ago. Some galaxies are millions of light years away, so their light has taken millions of years to reach us. The universe thus has a kind of inbuilt time machine: the further back we look, the younger a universe we are probing. Astrophysicists use this to read the history of the universe and understand how and where we came from.

"Dream in light years, challenge miles, walk step by step" – William Shakespeare

How powerful is a hydrogen bomb? And how does it work?

At 8:15 on the morning of 6th August 1945, all people saw was a blinding light, followed by complete darkness and destruction. It was the most powerful weapon mankind had ever created, and it unleashed energy and radiation that killed a hundred and forty thousand people in the industrial city of Hiroshima, Japan. Today we have thermonuclear weapons, also called hydrogen bombs. Edward Teller, a Hungarian physicist, worked on the Manhattan project to produce the first atomic bomb, based on uranium fission. Teller had long been interested in a hydrogen fusion bomb, but secrecy and the lack of access to computers contributed to slow progress. Stanislaw Ulam, a Polish mathematician, realized that a fission bomb could be used as a trigger for a fusion reaction, and it is believed that Teller seized on this for what became, in 1951, the "Teller-Ulam" design. Most sources agree that the H-bomb works in a series of stages, occurring in microseconds, one after the other. A narrow metal case houses two nuclear devices separated by polystyrene foam. One is ball-shaped, the other cylindrical. The ball is essentially a standard atomic fission bomb: when it is detonated, high energy radiation rushes out ahead of the blast.

The first H-Bomb test took place on November 1, 1952 on the small Pacific island of Elugelab.

How a hydrogen bomb works?

The first hydrogen bomb released the energy equivalent of 10 million tons of TNT. While the atomic bomb works on the principle of releasing energy through the splitting of atoms, called fission, a hydrogen bomb works by fusing atoms together, and it produces far more energy than the atom bomb. Fusion is more powerful than fission; it is the same process that powers our sun. And when fission is combined with the fusion of hydrogen, it creates energy orders of magnitude higher than fission alone, which makes the hydrogen bomb hundreds to thousands of times more powerful than atomic bombs. The fusion portion of the bomb creates energy by combining two isotopes of hydrogen, called deuterium and tritium, to create helium. Unlike a natural hydrogen atom, which is made of one electron orbiting one proton, these isotopes have extra neutrons in their nuclei. A large amount of energy is released when these two isotopes fuse together to form helium, because a helium atom has much less mass than the two isotopes combined; the excess mass is released as energy. One of the main problems with creating the hydrogen bomb was obtaining the tritium. Scientists found that they could generate it inside the bomb itself, with a compound combining lithium and deuterium.
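
The "excess mass" is exactly what E = mc^2 converts into energy. A sketch of the bookkeeping, using standard atomic masses:

    # Mass bookkeeping for deuterium + tritium -> helium-4 + neutron.
    # Masses in unified atomic mass units (u); 1 u is equivalent to 931.494 MeV.
    U_TO_MEV = 931.494
    deuterium, tritium = 2.014102, 3.016049
    helium_4, neutron = 4.002602, 1.008665

    mass_defect_u = (deuterium + tritium) - (helium_4 + neutron)
    print(round(mass_defect_u * U_TO_MEV, 1))   # ~17.6 MeV released per fusion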

Scientists chose hydrogen for fusion because it has only one proton and thus less electrical charge than atoms with multiple protons in their nuclei. It is possible to combine nuclei when the temperature is increased, but the temperatures needed are astronomically high, higher even than at the center of our sun: 100 million degrees Celsius, whereas the center of the sun is about 15 million degrees. At this temperature the isotopes become a form of matter called plasma, in which the orbiting electrons are stripped away from the nuclei. The nuclei can then combine with each other to form a helium nucleus and a free neutron. But how is a temperature of 100 million degrees achieved? This is where the fission, or atomic, bomb inside the hydrogen bomb's casing comes into play: the fission provides the energy needed to heat up the fusion reaction. A hydrogen bomb is actually three bombs in one. It contains an ordinary chemical bomb, a fission bomb and a fusion bomb. The chemical bomb initiates the fission bomb, which initiates the fusion bomb. All these events happen in only about 600 billionths of a second: 550 billionths of a second for the fission bomb implosion, and 50 billionths of a second for the fusion bomb. The result is an immense explosion with a 10 million ton yield, 700 times more powerful than an atom bomb. Only six countries have such bombs: China, France, India, Russia, the United Kingdom, and the United States. The world now has over 10,000 such bombs, capable of easily destroying every single person on earth many times over.

"I don't know what weapons countries might use to fight World War III, but wars after that will be fought with sticks and stones." – Albert Einstein

The Atomic Bomb – How it changed the course of history?

During World War II the United States spent an unprecedented $2 billion to feed an ultra-secret research and development program, the outcome of which would alter the relationships of nations forever. Known as the Manhattan project, it was the quest by the United States and her closest allies to create a practical atomic bomb: a single device capable of mass destruction, the threat of which alone might be powerful enough to end the war. The motivation was simple. Scientists escaping the Nazi regime had revealed that research in Germany had confirmed the theoretical viability of atomic bombs. In 1939, fearing that the Nazis might now be developing such a weapon, Albert Einstein and others wrote to President Franklin D. Roosevelt (FDR) warning of the need for atomic research. By 1941 FDR had authorized formal, coordinated scientific research into such a device. Among those whose efforts would ultimately unleash the power of the atom was Robert Oppenheimer, who was appointed the project's scientific director in 1942. Under his direction the famous laboratories at Los Alamos would be constructed and the scientific team assembled. On July 16, 1945, near a small town called Alamogordo, New Mexico, the course of human history was changed: the first atomic bomb was detonated that day.

The Trinity test – the world's first atomic bomb, detonated at 5:30 A.M. on July 16, 1945, near Alamogordo, New Mexico

Principle of an atomic bomb

An atom bomb works on the principle that when you break up the nucleus of an atom, a large amount of energy is released, because it takes a large amount of energy to keep the nucleus bound together; when you split it apart, that energy is set free. Scientists chose the biggest and heaviest nucleus found in nature as the best candidate for splitting: uranium. Uranium is unique in that one of its isotopes is the only naturally occurring isotope capable of sustaining a nuclear fission reaction. A uranium atom has 92 protons and 146 neutrons, together giving an atomic mass of 238, or U238. A very small portion of uranium, when it is mined, is in the form of the isotope U235; this isotope has the same 92 protons but only 143 neutrons, three fewer than U238. U235 is highly unstable, which makes it highly fissionable. When U235 is slammed by a neutron, it becomes uranium 236. In the process of splitting and creating two more stable atoms, a whole lot of energy is released, along with three more neutrons. These three neutrons fly out and slam into more U235 atoms, and thus a chain reaction occurs, causing more and more U235 to be split and ultimately causing a huge explosion. Natural uranium contains only 0.7% of the U235 isotope, and a whole lot of it is needed to make one atomic bomb.
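
The runaway growth of the chain reaction is easy to see numerically. This toy loop ignores neutron losses and geometry entirely and simply assumes every freed neutron triggers another fission:

    # Toy chain reaction: each U-235 fission frees roughly three neutrons,
    # and here every single one goes on to split another nucleus.
    NEUTRONS_PER_FISSION = 3

    fissions = 1
    for generation in range(1, 11):
        fissions *= NEUTRONS_PER_FISSION
        print(f"generation {generation:2d}: {fissions:,} fissions")

    # Tripling each step, fewer than ~55 generations already exceed the
    # ~2.6e24 atoms in a kilogram of U-235, hence the sudden, huge release.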

Another engineering challenge is to create a vessel of the correct shape and material to contain the neutrons after fissioning, so that they do not escape but rather cause more atoms to fission. The vessel is lined with a special mirror that forces neutrons back into the fissionable material rather than letting them escape. Then the correct amount of fissionable material has to be placed inside this vessel; this is called the 'supercritical mass'. There has to be enough mass to sustain an uncontrollable chain reaction resulting in an explosion. The supercritical mass has to be kept apart until you are ready for an explosion; otherwise an explosion can occur when you don't want it, because these isotopes are unstable and throw off neutrons randomly. In an atomic bomb, two sub-critical masses are slammed together, usually with a conventional bomb contained inside the outer bomb; this conventional explosive charge initiates the chain reaction. The project ultimately created the first man-made nuclear explosion, which Robert Oppenheimer called "Trinity", on July 16, 1945. The concept of an atom bomb is simple, but the process of actually creating one is not so simple.

"Now I am become Death, the destroyer of worlds." – J. Robert Oppenheimer

3 Great inventions with simple engineering techniques – Catseyes, Fountain pen, Safety pin

Cats eyes – Road Reflectors

When you are driving at night, you see reflective objects on the road. These objects reflect the light and guide people to drive safely. It may seem so simple, but the idea for this invention came about in an interesting way. One night in 1933, when the road mender Percy Shaw was driving home in Yorkshire, he saw the light of his car headlamps reflected in the eyes of a cat beside the road. This gave Shaw the inspiration that by replicating this effect he could produce a practical way of helping drivers navigate poorly lit roads. Shaw's challenge was to create a device bright enough to illuminate roads at night, robust enough to cope with cars constantly driving across it, and that also required minimum maintenance. Shaw came up with a small device that could be inserted into the road as a marker. It consisted of four glass beads placed in two pairs facing in opposite directions, embedded in a flexible rubber dome. When vehicles drove over the dome, the rubber contracted and the glass beads dropped safely beneath the road surface. The device was even self-cleaning: the cast-iron base collected rainwater, and whenever the top of the dome was depressed, the rubber would wash the water across the glass beads to cleanse away any grime, just as the eye is cleaned by tears. The patent for the catseye was registered in 1934, and in 2001 the product was voted the greatest design of the twentieth century, ahead even of Concorde.

Percy Shaw – Catseyes in 1934

Fountain pen

The invention of the modern fountain pen is really more a story of perfection than invention. In 1883, more than fifty years after the fountain pen was first invented, a New York insurance broker, Lewis Waterman, was set to sign an important contract and decided to honor the occasion by using the standard ink-filled pen of the day. However, fountain pens were notoriously unreliable, especially in their capacity to regulate their ink flow; when his pen leaked so that the contract could not be signed, Waterman decided to do something about it. Within a year he had designed the world's first practical, usable, and virtually leak-proof fountain pen. To regulate the flow of ink he successfully applied the principle of capillary action, with the inclusion of a tiny air hole in the nib, along with grooves in the feeder mechanism, to control the flow of ink from his new leak-proof reservoir to the nib.

As early as the beginning of the eighteenth century, the chief instrument-maker to the king of France, M. Bion, crafted fountain pens with nibs, five of which survive to this day. The first steel pen point was manufactured in 1828, thought to be invented by Petrache Poenaru, and in the 1830s the inventor James Perry made several unsuccessful attempts at designing nibs that employed the principle of capillary action. But it was Lewis Waterman who overcame every obstacle and crafted a successful pen. It was so successful that by 1901, two years after Waterman's death, more than 350,000 pens of his design were being sold worldwide.

Safety pin

When it comes to simple engineering, we can't overlook the safety pin. This useful object is found in households across the globe, and it even gained status as a fashion accessory with the punk movement of the 1970s. Walter Hunt was a New York mechanic who, in 1849, sat wondering how to pay off a $15 loan. He spent around three hours twisting a length of wire in his fingers before he created the answer to his problems: the humble safety pin. Pins were by no means a new idea, having existed for centuries before Hunt's twist on the design. However, his creation was unique in that it solved the old style's potential for pricking oneself. His pin has a clip at the top which locks the pin and keeps the wearer safe from pricking, while at the bottom a spring-like structure, made by bending the same wire, maintains the tension of the pin. Hunt's design was patented in April 1849, and he sold the rights to his creditor, clearing a $385 profit. Unfortunately, Hunt had no idea how popular his invention was set to become: even after 150 years, we are still using the safety pin, which works on very simple engineering. He also designed America's first sewing machine, with an eye-pointed needle, but fearing the loss of jobs his creation might cause, he did not patent the idea. It was left to a fellow American, Elias Howe, to claim the credit for that invention some twenty years later.

"A man who could invent a safety pin . . . was truly a mechanical genius . . ." – New York Times

Evolution of motorcar mechanisms

It is difficult to imagine a world without the motorcar. Back in the 1700s, some of the very first cars were powered by steam engines. When the German engineer Karl Benz drove a motorcar tricycle in 1885, and fellow Germans Gottlieb Daimler and Wilhelm Maybach converted a horse-drawn carriage into a four-wheeled motorcar in August 1886, none of them could have imagined the effects of their invention. Benz recognized the great potential of petrol as a fuel. His three-wheeled car had a top speed of just ten miles (16 km) per hour with its four-stroke, one-cylinder engine. After receiving his patent in January 1886, he began selling the Benz Velo, but the public doubted its reliability. Benz's wife Bertha had a brilliant idea to advertise the new car: in 1888 she took it on a 60 mile (100 km) trip from Mannheim to near Stuttgart. Despite having to push the car up hills, the success of the journey proved to a skeptical public that this was a reliable mode of transport.

Daimler and Maybach did not produce commercially feasible cars until 1889. Initially the German inventions did not meet with much demand, and it was French companies like Panhard et Levassor that redesigned and popularized the automobile. In 1926 Benz's company merged with Daimler's to form the Daimler-Benz company. Benz had left his company in 1906 and, remarkably, he and Daimler never met. Due to higher incomes and cheaper, mass-produced cars, the United States led in terms of motorization for much of the twentieth century. This kind of mobility has, however, come at a cost. Some 25 million people are estimated to have died in car accidents worldwide during the twentieth century, and climate-changing exhaust gases and suburban sprawl are but two more of the consequences of a heavy reliance on the automobile.

Karl Benz with his wife Bertha, the first motor car (1885)

Invention of the clutch

Almost all historians agree that the clutch was developed in Germany in the 1880s. Daimler met Maybach while they were working for Nikolaus Otto, the inventor of the internal combustion engine. In 1882 the two set up their own company, and from 1885 to 1886 they built a four-wheeled vehicle with a petrol engine and multiple gears. The gears were external, however, and were engaged by winding belts over pulleys to drive each selected gear. In 1889 they developed a closed four-speed gearbox and a friction clutch to power the gears; this car was the first to be marketed by the Daimler motor company, in 1890. Without a clutch, the wheels keep turning whenever the car engine is running; for the car to stop without stalling, the wheels and engine must be separated by a clutch. A friction clutch consists of a flywheel mounted on the engine side, while the clutch plate comes from the drive shaft and is a large metal disc covered with a frictional material. When the flywheel and clutch make contact, power is transmitted to the wheels.

Patent drawings of the first motorcar – Karl Benz

Gears in Motorcars

Karl Benz was the first to add a second gear to his machine, and he also invented the gear shift to change between the two. The suggestion for the additional gear came from Benz's wife, Bertha, who drove the three-wheeled Motorwagen 65 miles from Mannheim to Pforzheim, the first long distance automobile trip. Gears allow the engine to be maintained at its most efficient rpm while altering the relative speed of the drive shaft to the wheels. Gears originally required double clutching: the clutch had to be depressed to disengage the first gear from the drive shaft, then released to allow the correct rpm for the new gear to be selected, and then pressed again to engage the drive shaft with the new gear. Modern cars use synchromesh gearboxes, which use friction to match the speeds of the new gear and the shaft before the teeth of the gears engage, meaning that the clutch only needs to be pressed once.
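
As a rough illustration of what the gearbox does, wheel speed equals engine speed divided by the product of the gear ratio and the final drive ratio; the ratios below are hypothetical:

    def wheel_rpm(engine_rpm: float, gear_ratio: float, final_drive: float = 4.0) -> float:
        """Gears trade speed for torque: wheel rpm = engine rpm / (gear * final drive)."""
        return engine_rpm / (gear_ratio * final_drive)

    # A short first gear keeps the engine in its efficient band at low road speed;
    # a tall top gear does the same while cruising. Ratios here are made up.
    print(wheel_rpm(3000, gear_ratio=3.5))   # 1st gear: ~214 wheel rpm
    print(wheel_rpm(3000, gear_ratio=0.8))   # top gear: 937.5 wheel rpm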

"One thing I feel most passionately about: love of invention will never die" – Karl Benz

Why the Higgs Boson is called the 'God Particle'?

In 1964 Peter Higgs, along with five other scientists, proposed a theory called the Higgs mechanism to explain the existence of mass in the universe. Before the 1930s, atoms were considered the fundamental particles. Then we found electrons, protons and neutrons as subatomic particles. Later we found that protons and neutrons are themselves made up of even smaller fundamental particles called quarks. Quarks are fundamental building blocks of the whole universe. The key evidence for the existence of these elementary particles came from a series of inelastic electron-nucleon scattering experiments conducted between 1967 and 1973 at the Stanford Linear Accelerator Center. Quarks are commonly found in protons and neutrons. There are six types of quarks: up, down, top, bottom, strange and charm. They can have positive (+) or negative (−) electric charge: up, charm and top quarks have a +2/3 charge, while down, strange and bottom quarks have a −1/3 charge. So protons are positive because they contain two up quarks (+2/3 each) and one down quark (−1/3), giving a net positive charge (+2/3 + 2/3 − 1/3 = 1). These three quarks are known as valence quarks, though a proton can also contain an additional up quark and anti-up quark pair.
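
That charge bookkeeping is easy to verify with exact fractions:

    from fractions import Fraction

    UP, DOWN = Fraction(2, 3), Fraction(-1, 3)   # quark electric charges

    proton = UP + UP + DOWN      # two up quarks and one down quark
    neutron = UP + DOWN + DOWN   # one up quark and two down quarks
    print(proton, neutron)       # 1 and 0, matching the observed charges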

The Higgs field theory

In the second half of the 20th century, physicists developed a theory called the Standard Model of particle physics. It describes twelve fundamental particles that make up all matter, and four particles called bosons that carry three fundamental forces of nature: the strong force, the weak force, and electromagnetism. Gravity is a fourth force; it is not part of this model, but it can be modeled using general relativity. With the fundamental particles in the Standard Model plus gravity, we can build almost everything in the entire universe. However, until 2012 the Standard Model had an unresolved problem: according to the theory, all force-carrying particles should be massless, and although photons are indeed massless, experiments show that the weak force's bosons have mass. So here was a promising model that could be used to explain our universe, yet one that perhaps would need to be thrown out because of a seemingly fatal flaw in the way it treated the weak force. In the late 1950s physicists had no idea how to resolve these issues, and all attempts to solve the problem only created new theoretical ones. Then, in 1964, Peter Higgs hypothesized that perhaps the force particles were indeed massless, but gained mass when they interacted with an energy field that pervades the entire universe.

During the very early moments following the big bang, the elementary particles in the universe were massless, pure streams of energy moving at the speed of light. As the expansion of the universe proceeded, density and temperature decreased below a certain key value. According to the theory, the Higgs field then began to interact with particles and give them mass. Different particles interact differently with the field: particles that interact with it more intensely have greater mass, and particles that interact with it less have lower mass. Imagine the Higgs field as water: pointed objects interact less with the water, while cube-shaped objects interact with it more. Some particles, like photons, don't interact with the field at all and so are massless. A fundamental part of the theory was the existence of a specific particle, the Higgs boson, a boson that would allow the Higgs mechanism to unfold correctly and give mass to all other particles.

The Higgs Boson – CMS experiment

CERN's discovery of a new particle

Even though Higgs theorized it, scientists were unable to prove it until 2012: particle accelerators had to reach enormous energies to detect the particle. Finally, the Large Hadron Collider (LHC), CERN's particle accelerator, was turned on in 2008 and managed to recreate the required energy and temperature conditions in 2012. The Higgs boson was at last experimentally detected, and on 4th July a conference held in the CERN auditorium announced the discovery of a particle compatible with the Higgs boson. The machine accelerates bunches of hadrons to close to the speed of light and collides them with each other from opposite directions. At four separate points the two beams cross, causing protons to smash into each other at enormous energies, with their destruction witnessed by super-sensitive instruments. Even though the LHC is the world's largest particle accelerator, it had to work hard to detect the Higgs boson. If the Higgs field did not exist, all particles in the universe would be absolutely weightless and would fly around the universe at the speed of light. For this reason the Higgs boson is often called the 'God particle'.

“I never expected this to happen in my lifetime and shall be asking my family to put champagne in the fridge.”Peter Higgs

How do waves differ from tides? Why do they occur?

A wave begins as the wind ruffles the surface of the ocean. When the ocean is calm and glass-like, even the mildest breeze forms ripples, the smallest type of wave. Ripples provide surfaces for the wind to act on, which produces larger waves. Stronger winds push the nascent waves into steeper and higher hills of water. The size a wave reaches depends on the speed and strength of the wind, the length of time the wind blows, and the distance of open water over which it blows, known as the fetch. A long fetch accompanied by strong and steady winds can produce enormous waves. The highest point of a wave is called the crest and the lowest point the trough; the distance from one crest to another is known as the wavelength.

Although water appears to move forward with the waves, for the most part water particles travel in circles within the waves. The visible movement is the wave's form and energy moving through the water, courtesy of energy provided by the wind. Wave speed also varies; on average, waves travel about 20 to 50 mph. Ocean waves vary greatly in height from crest to trough, averaging 5 to 10 feet, though storm waves may tower 50 to 70 feet or more. The biggest wave ever recorded by humans was in Lituya Bay, on the southeast side of Alaska, on July 9th, 1958, when a massive earthquake triggered a mega-tsunami, the tallest in modern times. As a wave enters shallow water and nears the shore, its up and down movement is disrupted and it slows down. The crest grows higher and begins to surge ahead of the rest of the wave, eventually toppling over and breaking apart. The energy released by a breaking wave can be explosive: breakers can wear down rocky coasts and also build up sandy beaches.
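
That 20 to 50 mph range follows from the standard deep-water result that a wave's speed grows with its period, c = g*T / (2*pi). A minimal sketch, with 6 and 14 seconds taken as representative swell periods:

    import math

    G = 9.81  # gravitational acceleration, m/s^2

    def deep_water_wave_speed_mph(period_s: float) -> float:
        """Deep-water phase speed c = g*T / (2*pi), converted from m/s to mph."""
        return (G * period_s / (2 * math.pi)) * 2.23694

    # Typical ocean swells have periods of roughly 6 to 14 seconds (assumed range):
    print(round(deep_water_wave_speed_mph(6), 1))    # ~21 mph
    print(round(deep_water_wave_speed_mph(14), 1))   # ~49 mph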

At Nazaré, the Brazilian Rodrigo Koxa holds the record for the biggest wave ever surfed (80 ft).

Why does a tide occur?

Tides are the regular daily rise and fall of ocean waters. Twice each day in most locations, water rises up over the shore until it reaches its highest level, or high tide. In between, the water recedes from the shore until it reaches its lowest level, or low tide. Tides respond to the gravitational pull of the moon and sun. Gravitational pull has little effect on the solid and inflexible land, but the fluid oceans react strongly. Because the moon is closer, its pull is greater, making it the dominant force in tide formation.

Gravitational pull is greatest on the side of earth facing the moon and weakest on the side opposite the moon. The difference in these forces, in combination with earth's rotation and other factors, allows the oceans to bulge outward on each side, creating high tides. The sides of earth that are not in alignment with the moon experience low tides at this time. Tides follow different patterns depending on the shape of the seacoast and the ocean floor. In Nova Scotia, water at high tide can rise more than 50 feet higher than the low tide level. Tides tend to roll in gently on wide, open beaches; in confined spaces, such as a narrow inlet or bay, the water may rise to very high levels at high tide.

There are typically two spring tides and two neap tides each month. A spring tide has a greater range than the mean range: the water level rises and falls to the greatest extent from the mean tide level. Spring tides occur about every two weeks, when the moon is full or new, and tides are at their maximum when the moon and the sun are aligned with the earth. In a semi-diurnal cycle, the high and low tides occur around 6 hours and 12.5 minutes apart. The same tidal forces that cause tides in the oceans also affect the solid earth, causing it to change shape by a few inches.

How optics changed the world?

The formal study of light began as an effort to explain vision. Early Greek thinkers associated vision with a ray emitted from the human eye. A surviving work from Euclid, the Greek geometrician, laid out basic concepts of perspective, using straight lines to show why objects at a distance appear smaller than they actually are. The eleventh-century Islamic scholar Abu Ali al-Hasan Ibn al-Haytham, known also by the Latinized name Alhazen, revisited the work done by Euclid and Ptolemy and advanced the study of reflection, refraction, and color. He argued that light moves out in all directions from illuminated objects and that vision results when light enters the eye. In the late 16th and 17th centuries, researchers including the Dutch mathematician Willebrord Snel noticed that light bent as it passed through a lens or fluid. Although many still believed the speed of light to be infinite, the Danish astronomer Ole Romer in 1676 used telescopic observations of Jupiter's moons to estimate the speed of light at 140,000 miles a second. Around the same time, Sir Isaac Newton used prisms to demonstrate that white light could be separated into a spectrum of basic colors. He believed that light was made of particles, whereas the Dutch mathematician Christiaan Huygens described light as a wave.
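
Snel's observation is captured by what we now call Snell's law, n1*sin(theta1) = n2*sin(theta2); a minimal sketch:

    import math

    def refraction_angle_deg(incidence_deg: float, n1: float, n2: float) -> float:
        """Snell's law: n1 * sin(theta1) = n2 * sin(theta2); returns theta2 in degrees."""
        sin_theta2 = n1 * math.sin(math.radians(incidence_deg)) / n2
        return math.degrees(math.asin(sin_theta2))

    # Light entering water (n ~ 1.33) from air (n ~ 1.00) at 45 degrees bends
    # toward the normal, the effect Snel noticed in lenses and fluids:
    print(round(refraction_angle_deg(45, 1.00, 1.33), 1))   # ~32.1 degrees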

The particle versus wave debate advanced in the 1800s. The English physician Thomas Young's experiments with vision suggested wavelike behavior, since sources of light seemed to cancel out or reinforce each other. The Scottish physicist James Clerk Maxwell's research united electricity and magnetism and showed that light falls along a single electromagnetic spectrum. The arrival of quantum physics in the late 19th and early 20th century prompted the next leap in understanding light. By studying the emission of electrons from a grid hit by a beam of light, known as the photoelectric effect, Albert Einstein concluded that light came in packets he called photons, emitted as electrons changed their orbit around an atomic nucleus and then jumped back to their original state. Though Einstein's finding seemed to favor the particle theory of light, further experiments showed that light, and matter itself, behave both as waves and as particles.

How do lasers work?

Einstein's work on the photoelectric effect led to the laser, an acronym for "light amplification by stimulated emission of radiation." As electrons are excited from one quantum state to another, they emit a single photon when jumping back. But Einstein predicted that when an already excited atom was hit with the right type of stimulus, it would give off two identical photons. Subsequent experiments showed that certain source materials, such as ruby, not only did that but also emitted photons that were perfectly coherent: not scattered like the emissions of a flashlight, but all of the same wavelength and amplitude. These powerfully focused beams are now commonplace, found in grocery store scanners, handheld pointers, and cutting instruments from the hospital operating room to the shop floors of heavy industry.
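
One way to see what coherence means in numbers: every photon in a ruby laser beam shares the characteristic ruby emission wavelength of about 694.3 nm, and therefore the same energy. A sketch (the 1 mJ pulse energy is an assumed figure for illustration):

```python
# Planck's constant, speed of light, and joules per electronvolt.
H, C, EV = 6.626e-34, 2.998e8, 1.602e-19

ruby_wavelength_nm = 694.3  # characteristic ruby laser line
energy_ev = H * C / (ruby_wavelength_nm * 1e-9) / EV
print(f"Ruby laser photon energy: {energy_ev:.2f} eV")  # ~1.79 eV

pulse_energy_j = 1e-3  # assumed 1 mJ pulse, for illustration only
photons = pulse_energy_j / (energy_ev * EV)
print(f"Identical photons per 1 mJ pulse: {photons:.2e}")  # ~3.5e15
```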

Future trends in fiber optic communication

Fiber optic communication is widely seen as the future of data communication. Its evolution has been driven by advances in technology and by increasing demand, and it is expected to continue into the future with the development of new and more advanced communication technologies.

Another future trend will be the extension of present semiconductor lasers to a wider variety of lasing wavelengths. Shorter-wavelength lasers with very high output powers are of interest in some high-density optical applications. Presently, laser sources that are spectrally shaped through chirp management to compensate for chromatic dispersion are available. Chirp management means that the laser is controlled such that it undergoes a sudden change in its wavelength when firing a pulse, so that the chromatic dispersion experienced by the pulse is reduced. There is a need to develop instruments to characterize such lasers. Also, single-mode tunable lasers are of great importance for future coherent optical systems. These tunable lasers lase in a single longitudinal mode that can be tuned to a range of different frequencies.
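
For a sense of why chirp management matters: chromatic dispersion spreads a pulse in time by roughly dt = D * L * d(lambda), where D is the fiber's dispersion parameter. A sketch using a commonly quoted value for standard single-mode fiber at 1550 nm (the span length and source linewidth are assumptions):

```python
def pulse_broadening_ps(D_ps_nm_km: float, length_km: float, dl_nm: float) -> float:
    """Chromatic dispersion broadening: dt = D * L * d(lambda)."""
    return D_ps_nm_km * length_km * dl_nm

# Standard single-mode fiber at 1550 nm: D ~ 17 ps/(nm*km).
# Assume an 80 km span and a source linewidth of 0.1 nm.
dt = pulse_broadening_ps(17, 80, 0.1)
print(f"Pulse broadening: {dt:.0f} ps")  # ~136 ps
```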

"Music is the arithmetic of sounds as optics is the geometry of light." – Claude Debussy

Large Hadron Collider-the world’s largest machine

The smallest thing that we can see with a light microscope is about 500 nanometers across, while a typical atom is anywhere from 0.1 to 0.5 nanometers in diameter. So we need an electron microscope to image atoms. The electron microscope was invented in 1931: beams of electrons are focused on a sample, and when they hit it they are scattered, and this scattering is used to recreate an image. Then what about protons or neutrons? Or what about quarks, the most fundamental building blocks of matter? How did we find that such small particles exist? The answer is a particle collider, a tool that accelerates two beams of particles into each other, in use since the 1960s.

The largest machine built by man, the Large Hadron Collider (LHC) is a particle accelerator occupying an enormous circular tunnel 27 kilometres in circumference, ranging from 165 to 575 feet below ground. It is situated near Geneva, Switzerland, and is so large that over the course of its circumference it crosses the border between France and Switzerland. It is a giant collaboration involving over 100 countries and some 10,000 scientists. The tunnel itself was constructed between 1983 and 1988 to house another particle accelerator, the Large Electron-Positron Collider (LEP), which operated until 2000. Its replacement, the LHC, was approved in 1995 and was finally switched on in September 2008.

The Large Hadron Collider (LHC) has a circumference of 27 kilometres

Working of the Large Hadron Collider

The LHC is the most powerful particle accelerator ever built and has been designed to explore the limits of what physicists refer to as the Standard Model, which deals with fundamental subatomic particles. Two vacuum pipes are installed inside the tunnel, intersecting in some places, and 1,232 main magnets are connected to the pipes. For proper operation, the collider magnets need to be cooled to -271.3 °C; to attain this temperature, 120 tons of liquid helium is fed into the LHC. These powerful magnets can accelerate protons to near the speed of light, so they can complete a circuit in less than 90 millionths of a second. Two beams travel in opposite directions around the ring. At four separate points the two beams cross, causing protons to smash into each other at enormous energies, with their destruction being witnessed by super-sensitive instruments. But it is not easy to make this happen. Each beam consists of bunches of protons, and most of the protons simply miss each other, carry on around the ring, and try again. Atoms are mostly empty space, so getting them to collide is incredibly difficult: it is like firing one needle at another from 10 kilometres away and having them meet head-on.
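
The "less than 90 millionths of a second" figure follows directly from the ring circumference (about 26.66 km, usually rounded to 27 km) and a proton speed close to that of light; a quick back-of-the-envelope check:

```python
C_LIGHT = 299_792_458     # speed of light, m/s
circumference_m = 26_659  # LHC ring, usually rounded to 27 km

t = circumference_m / C_LIGHT  # a proton at essentially light speed
print(f"Time per circuit: {t * 1e6:.1f} microseconds")  # ~88.9 us
print(f"Circuits per second: {1 / t:,.0f}")             # ~11,245
```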

Collision of protons at near the speed of light

The aim of these collisions is to produce countless new particles that simulate, on a micro scale, some of the conditions postulated in the Big Bang at the birth of the universe. The Higgs boson, the so-called 'God Particle' thought to be responsible for the very existence of mass, was discovered with the help of the LHC. If it disappeared, all particles in the universe would become massless and fly around at the speed of light (299,792,458 m/s); at that speed, the moon is only about 1.3 seconds away from earth.
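
The 1.3-second figure is easy to verify from the average earth-moon distance of about 384,400 km:

```python
C_LIGHT = 299_792_458        # speed of light, m/s
moon_distance_m = 384_400e3  # average earth-moon distance

print(f"{moon_distance_m / C_LIGHT:.2f} s")  # ~1.28 s
```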

"When you look at a vacuum in a quantum theory of fields, it isn't exactly nothing." – Peter Higgs

How are atomic clocks so accurate?

Most types of clocks rely on the oscillation of a solid body, be it a pendulum, a balance wheel, or a quartz crystal, but each suffers from the effects of temperature, pressure, and gravity. Earlier time-measuring methods depended on the spin of the earth, but these suffer from seasonal effects and tidal friction. The moon raises tides on earth, and the resulting friction gradually slows the earth's rotation; this is called tidal friction. Atoms, however, vibrate a fixed number of times per second. Both the U.S. National Bureau of Standards and the United Kingdom's National Physical Laboratory tried to take advantage of these vibrations.

In 1949 the Americans built a quartz clock that was synchronized to the 24-GHz vibrations of low-pressure gaseous ammonia molecules. The British, under the leadership of physicist Louis Essen (1908-1997), used the oscillations of an electrical circuit synchronized to the vibrations of caesium atoms: the caesium was kept in a tunable microwave cavity, and the clock relied on the fact that there are 9,192,631,770 transitions between two hyperfine ground-state energy levels every second. This number came to define the second, as opposed to the old definition of there being 86,400 seconds in one day. A good atomic clock was accurate to one part in 10¹⁴, and therefore would take about 3 million years to lose or gain a second.
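
The 3-million-year claim follows from the stated accuracy: a fractional error of one part in 10¹⁴ accumulates a full second only after 10¹⁴ seconds have elapsed. A quick check:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.156e7 seconds

fractional_error = 1e-14  # one part in 10^14
years_to_drift_1s = (1 / fractional_error) / SECONDS_PER_YEAR
print(f"{years_to_drift_1s:,.0f} years")  # ~3.2 million years
```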

JILA’s 3-D Quantum Gas Atomic clock

Four atomic clocks are carried on each of the many satellites of the global positioning system, and comparisons of electromagnetic-wave travel times enable positions on earth to be measured very precisely. The clocks are also used by geophysicists to monitor variations in the spin rate of the earth and the drifting of the continents. Since records began, the shortest day recorded was July 19, 2020, when the day was 1.4602 milliseconds shorter than 24 hours.

Why are atomic clocks used in GPS?

The Global Positioning System (GPS) consists of 24 satellites orbiting the earth. A GPS receiver uses the positions of four of these satellites to locate itself: one to correct the time on the receiver, and three to fix its position. A signal is sent to the receiver from the first satellite containing the satellite's location and the signal's time of departure. The receiver then multiplies the signal's travel time by the speed of light to calculate its distance from the satellite. With one satellite, the receiver knows only that it is located on a sphere around that satellite with a radius equal to the calculated distance, so it does the same calculation with a second satellite. The intersection of these two spheres narrows the location to the circumference of a circle. Then, with a third satellite, the receiver can reduce the location to a single point. Since signals travel at the speed of light, being off by even a millisecond means an error of about a million feet, or 300 kilometres; but with atomic accuracy, the receiver can locate itself to about 3 feet. GPS satellites fly in medium earth orbit (MEO) at an altitude of approximately 20,200 kilometres above the ground.
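
The ranging errors quoted above come straight from distance = speed of light × time; a sketch (the 3 ns offset is an assumed value chosen to illustrate atomic-clock-level accuracy):

```python
C_LIGHT = 299_792_458  # speed of light, m/s

def range_error_m(clock_error_s: float) -> float:
    """A clock offset translates directly into a ranging error."""
    return C_LIGHT * clock_error_s

print(f"1 ms clock error -> {range_error_m(1e-3) / 1000:.0f} km")  # ~300 km
print(f"3 ns clock error -> {range_error_m(3e-9):.1f} m")          # ~0.9 m (about 3 feet)
```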

The NIST-F1 is one of the most accurate time standards based on microwave atomic clocks. The most accurate atomic clocks lose about a second over 138 million years.

“Time isn’t the main thing. It’s the only thing.” – Miles Davis

What is the smallest particle in the universe?

The early models of the atom were simple, with protons and neutrons forming a nucleus and negatively charged electrons orbiting it; it seemed like a tiny solar system. In the early 1930s, however, analysis of cosmic rays and experiments with particle accelerators showed the existence of new particles by the dozen. In the early 1960s the American physicists Murray Gell-Mann and George Zweig independently conjectured that protons and neutrons were made of even more fundamental particles, which were named quarks in 1964. The word quark came from James Joyce's novel "Finnegans Wake", where it appears as a nonsense word coined by Joyce. The key evidence for their existence came from a series of inelastic electron-nucleon scattering experiments conducted between 1967 and 1973 at the Stanford Linear Accelerator Center. Other theoretical and experimental advances of the 1970s confirmed this discovery, leading to the standard model of elementary particle physics currently in force.

Properties of Quarks

Quarks are most commonly found inside protons and neutrons. They have several properties, including mass, electric charge, and color. There are six types of quarks: up, down, top, bottom, strange, and charm. They can have positive (+) or negative (-) electric charge: up, charm, and top quarks have a +2/3 charge, while down, strange, and bottom quarks have a -1/3 charge. Protons are positive because they contain two up quarks (+2/3 each) and one down quark (-1/3), giving a net positive charge (+2/3 +2/3 -1/3 = +1). These three quarks are known as valence quarks, but the proton can also contain additional up quark and anti-up quark pairs.

An anti-quark is the antiparticle of a quark, and the proton can also contain pairs of other quark types, including strange and anti-strange quarks, and charm and anti-charm quarks. In fact, the proton holds a multitude of quark-antiquark pairs. The quarks are held together by the strong force, which is carried by particles called gluons; inside the proton, there are zillions of gluons and quarks all moving around close to the speed of light. The valence quarks that define a proton make up only about 1% of its mass. A neutron consists of two down quarks and one up quark, which gives it an overall charge of 0. Quarks also have a property called color charge, which comes in three colors, red, blue, and green, each complemented by an anti-color. When these three colors mix, the result is white, which is why the proton is called colorless. The quarks change their colors constantly, but in order to maintain the colorless state, the matching anti-colors mix in. The interaction between quarks and gluons is responsible for almost all the perceived mass of protons and neutrons and is therefore where we get our mass.
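
The charge arithmetic above is easy to encode. A minimal sketch that sums the fractional charges of the valence quarks:

```python
from fractions import Fraction

# Electric charge of each quark flavor, in units of the proton charge.
QUARK_CHARGE = {
    "up": Fraction(2, 3),    "charm": Fraction(2, 3),    "top": Fraction(2, 3),
    "down": Fraction(-1, 3), "strange": Fraction(-1, 3), "bottom": Fraction(-1, 3),
}

def net_charge(valence_quarks):
    """Sum the fractional charges of a hadron's valence quarks."""
    return sum(QUARK_CHARGE[q] for q in valence_quarks)

print("proton :", net_charge(["up", "up", "down"]))    # 1
print("neutron:", net_charge(["down", "down", "up"]))  # 0
```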

The Large Hadron Collider (LHC)- the world’s largest particle accelerator (27 kilometres).

Conclusion

The discovery of quarks was a gradual process that took over a decade for the entire sequence of events to unfold. A variety of theoretical insights and experimental results contributed to this discovery, but the MIT-SLAC deep inelastic electron scattering experiments played a vital role. The existence of quarks is recognized today as a cornerstone of the standard model. In numerous experiments at CERN, including those at the Large Hadron Collider (LHC), physicists are measuring the properties of Gell-Mann and Zweig's particles with ever-greater precision.

                  "Three quarks for Muster Mark!" – James Joyce

Stem cell therapy – the future of modern medicine.

Our bodies contain many specialized cells that carry out specific functions; these are called differentiated cells. Stem cells are cells with the potential to develop into many different types of cells in the body, and they act as a repair system for the body. Because they are unspecialized, they do not perform specific functions themselves, but they create the potential to grow replacement tissues. The American developmental biologist James Thomson (b. 1958), from the University of Wisconsin School of Medicine, won the race to isolate and culture human embryonic stem cells. On November 6, 1998, the journal Science published the results of Thomson's research. It described how he used embryos from fertility clinics, donated by couples who no longer needed them, and developed ways to extract stem cells and keep them reproducing indefinitely.

With the ability to develop into any one of the 220 cell types in the body, stem cells hold great promise for treating a host of debilitating illnesses, including diabetes, leukemia, Parkinson's disease, heart disease, and spinal cord injury. They also provide scientists with models of human disease and new ways of testing drugs more effectively in living organisms. But for all the hopes invested, progress has been slow. It has not helped that stem cell research has been steeped in controversy, with different groups questioning the ethics of harvesting stem cells from human embryos.

Stem cell therapy – Delhi, India

In 2007 Thomson and Shinya Yamanaka, from Kyoto University, Japan, independently found a way to turn ordinary human skin cells into stem cells. Both groups used four genes to reprogram human skin cells. Their work is heralded as an opportunity to overcome problems including the shortage of human embryonic stem cells and restrictions on U.S. federal funding for research.

How does stem cell therapy work?

Researchers grow stem cells in a lab. These cultured stem cells are manipulated to specialize into specific types of cells, such as heart muscle cells, blood cells, or nerve cells. The specialized cells can then be implanted: healthy implanted heart muscle cells, for example, could contribute to repairing defective heart muscle. The first stem cell therapy was a bone marrow transplant performed by the French oncologist Georges Mathé in 1958 on five workers at the Vinča nuclear institute in Yugoslavia who had been affected by a criticality accident.

Stem cell therapies have become very popular in recent years, as people seek the latest alternative treatments for a wide range of conditions. They are also very expensive: even simple joint injections can cost $1,000, and more advanced treatments can cost up to $100,000 depending on the condition. Patients must do their research and ask as many questions as they can before financially committing to treatment. Because these are potentially life-changing treatments, they command high prices.

Future stem cell treatments

Stem cell treatment may help us cure various diseases in the future, but it is important not to overhype the potential of stem cells and to communicate findings to the public accurately. We must not let people be misled by claims that currently untreatable diseases can already be cured with stem cell treatments. However, with more research and investment, I believe that stem cell therapy could transform disease outcomes for many patients.

"The regenerative medicine revolution is upon us. Like iron and steel to the industrial revolution, like the microchip to the tech revolution, stem cells will be the driving force of this next revolution." – Cade Hildreth

Discovery and working of an MRI

The MRI (magnetic resonance imaging) scan is a medical imaging procedure that uses a magnetic field and radio waves to take pictures of the body's interior. It is mainly used to investigate or diagnose conditions that affect soft tissue, such as tumors or brain disorders. The MRI scanner is a complicated piece of equipment that is expensive to use and found only in specialized centers. Although Raymond Vahan Damadian (b. 1936) is credited with the idea of using nuclear magnetic resonance to look inside the human body, it was Paul Lauterbur (1929-2007) and Peter Mansfield (1933-2017) who carried out the work most strongly linked to magnetic resonance imaging (MRI) technology. The technique makes use of hydrogen atoms resonating when bombarded with magnetic energy. MRI provides three-dimensional images without harmful radiation and offers more detail than older techniques.

While training as a doctor in New York, Damadian started investigating living cells with a nuclear magnetic resonance machine. In 1971 he found that the signals carried on for longer with cells from tumors than with healthy ones. The methods used at this time were neither effective nor practical, although Damadian received a patent in 1974 for such a machine to be used by doctors to detect cancer cells.

The first full-body MRI scanner at the University of Aberdeen in Scotland (1970s)

The real shift came when Lauterbur, a U.S. chemist, introduced gradients to the magnetic field so that the origin of radio waves from the nuclei of the scanned object could be worked out. Through this he created the first MRI images in two and three dimensions. Mansfield, a physicist from England, came up with a mathematical technique that would speed up scanning and make clearer images. Damadian went on to build the first full-body MRI machine in 1977 and produced the first full MRI scan of the heart, lungs, and chest wall of his skinny graduate student, Larry Minkoff, although in a very different way from modern imaging.

Working of an MRI machine

The key components of an MRI machine are the magnet, radio waves, gradient coils, and a highly advanced computer. Human bodies are about 60% water, and water responds to magnetic fields. Each of the billions of water molecules inside us consists of an oxygen atom bonded to two hydrogen atoms (H2O), and the nuclei of the hydrogen atoms act as tiny magnets that are very sensitive to magnetic fields. The first step in taking an MRI scan is to use a big magnet to produce a uniform magnetic field around the patient. The gradient coils then divide the magnetic field into smaller sections of different magnetic strengths to isolate particular body parts. Take the brain as an example: normally the water molecules inside us are oriented randomly, but when we lie inside the magnetic field, most of our water molecules align with the rhythm, or frequency, of the field. The ones that do not align are called low-energy water molecules. To create an image of a body part, the machine focuses on these low-energy molecules. The radio waves are tuned to the same rhythm, or frequency, as the magnetic fields in the MRI machine.

By sending radio waves that match, or resonate with, the magnetic field, the low-energy water molecules absorb the energy they need to align with the magnetic field. When the machine stops emitting radio waves, the water molecules that had just aligned with the field release the energy they absorbed and return to their original positions. This movement is detected by the MRI machine, and the signal is sent to a powerful computer, which uses imaging software to translate the information into an image of the body. By taking images of the body in each section of the magnetic field, the machine produces a final three-dimensional image of the organ, which doctors can analyze to make a diagnosis.
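
The "matching rhythm" the radio waves must hit is the resonance (Larmor) frequency, which scales with the magnetic field strength; for hydrogen the constant is about 42.58 MHz per tesla, a standard value. A sketch using typical clinical field strengths:

```python
GAMMA_H_MHZ_PER_T = 42.58  # gyromagnetic ratio of hydrogen (standard value)

def larmor_frequency_mhz(field_tesla: float) -> float:
    """Resonance frequency the radio waves must match: f = gamma * B."""
    return GAMMA_H_MHZ_PER_T * field_tesla

for b in (1.5, 3.0):  # common clinical MRI field strengths
    print(f"{b} T scanner -> {larmor_frequency_mhz(b):.1f} MHz radio waves")
```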

"Medicine is a science of uncertainty and an art of probability." – William Osler

Should more money be spent on space exploration?

Poverty is still rising all over the world, and the COVID-19 pandemic has made it even worse. About 1.89 billion people, or nearly 36% of the world's population, have lived in extreme poverty, and nearly half the population in developing countries has lived on less than $1.25 a day. Why should we spend money on space exploration when we already have so many problems here on earth? Is it really that important? It is like asking whether our ancestors should have thought it a waste of time to figure out agriculture when they could simply keep hunting, or why they should explore new lands while there were so many problems in their own. Each year, space exploration contributes to many innovations on earth. It has answered many fundamental questions about our existence, and many more questions could be answered if only we increased our investment in space exploration. NASA's annual budget is 23 billion dollars, yet that is only about 0.1% of US GDP; even if we were to increase the international space budget twentyfold, it would still be a small fraction of GDP. Isn't our future worth a quarter of a percent?

“That’s one small step for man, one giant leap for mankind”.

Benefits of space exploration:

    Improves our day-to-day life

Ever since 1969, when Neil Armstrong became the first human to set foot on the moon, our interest in science and technology has grown enormously. On 22 February 1978, the US space agency launched the first satellite of its global positioning system (GPS) program; currently there are 31 GPS satellites orbiting the earth. Space exploration has helped us create many inventions that we use in our day-to-day lives, such as television, camera phones, the internet, laptops, LEDs, wireless gadgets, and water purification systems. There are nearly 3,372 active satellites providing information on navigation, business and finance, weather, climate and environmental monitoring, communication, and safety.

   Improving health care

The International Space Station plays a vital role in health and medical advancements. Astronauts working on the ISS are able to do experiments that aren't possible on earth because of the difference in gravity. Exomedicine, the study of medicine in microgravity, builds on the fact that gravity has effects at the molecular level, so working in an environment where it can be eliminated from the equation allows discoveries that would otherwise be impossible. Medical advancements due to space exploration include:

  • Diagnosis, treatment, and prevention of cardiovascular diseases
  • Treatment of chronic metabolic disorders
  • Better understanding of osteoporosis
  • Improvements in breast cancer detection
  • Programmable pacemakers
  • Laser angioplasty
  • NASA's space-technology-based device for asthma
  • The ISS's vital role in vaccine development
  • Early detection of immune changes to prevent shingles
  • Development of MRIs and CT (CAT) scans
  • Invention of ear thermometers
Proxima Centauri b is an exoplanet orbiting the red dwarf star Proxima Centauri

Need for space colonization

Overpopulation is one of the major crises on our planet. Currently there are 7.8 billion people alive on earth. Experts predict that there will be 9.7 billion people by 2050 and 11 billion by 2100, while our earth can carry only 9 billion to 12 billion people with its limited food and freshwater resources. That means we may have to find an exoplanet with suitable conditions. We have already been to the moon six times, and we have already sent rovers to Mars. Robotic missions are cost-efficient, but if one is considering the future of the human race, we have to go there ourselves. Elon Musk has announced that SpaceX intends to send people to Mars in 2022, and NASA has planned a colony on Mars by 2030. These missions are not something we need at this moment, but they may play an important role in our future. Proxima Centauri b is an exoplanet 4.24 light years away from us; with our current technology it is impossible to reach in our lifetime, but we should make it an aim for interstellar travel over the next 200 to 500 years. Stephen Hawking said that the human race has existed as a separate species for about 2 million years, that civilization began about 10,000 years ago, and that the rate of development has been steadily increasing. If the human race is to continue for another million years, we will have to boldly go where no one has gone before.

"The day we stop exploring is the day we commit ourselves to live in a stagnant world, devoid of curiosity, empty of dreams." – Neil deGrasse Tyson

THE USE OF MOBILE PHONES.

The invention of the mobile phone is one of the greatest achievements of humankind. The first mobile phone service was started in Japan in 1979, and within 42 years it has revolutionized people's lives. Mobile phone usage is growing at a rapid rate; today the world is unimaginable without mobile phones.

The mobile phone is one of the quickest means of communication. A person can communicate with friends or family who live miles apart, and it has largely replaced the old system of letter delivery. In the modern generation, access to the internet has acted as a miracle, allowing many tech companies to accumulate wealth. We can carry phones in our pockets and roam anywhere. Mobile phones have made life more comfortable, and they now do the work of laptops: we can do net banking, send important documents wherever required through email, and much more.

Parent’s encourage their children to use mobile phones as it promotes learning. People can assess to google or youtube for educational purposes. In the current pandemic situation the world has shifted to virtual meating and imparting knowledge as socializing is prohibited in peron/groups. Parents also encourage their children to use mobile phones as it has GPS system so that they are assured of their child safety when they venture out in the world alone. Phones also serves as an entertainment purposes. People can assess to different apps for movies and shows to keep themselves busy. People can save their money due to introduction of eBooks. Many important date can be stored in our phone which out mind can’t remember. It’s helps a person to captured their sweetest memory for them to cherish.

Mobile phones have connected the world digitally through the internet. People can learn what is happening around the world through the news. The current scenario has changed a lot, and the world is moving toward globalization: branding and advertising are done through online platforms, which has driven growth in many different industries. Different countries set their own cyber security rules so that there is no breach of people's privacy, and many tech companies hire professional hackers to protect their digital data so that no one can break into their systems.

“Mobile phones are neither good or bad it depends on the individual how they use “.

INSTAGRAM

Instagram, usually abbreviated to IG, Insta, or the Gram, is an American photo and video sharing social networking service created by Kevin Systrom and Mike Krieger. In April 2012, Facebook acquired the service for approximately US$1 billion in cash and stock. The app allows users to upload photos and videos that can be edited with filters and organized by hashtags and location tagging. Posts can be shared publicly or with approved followers. Users can browse other users' content by searching tags and locations, view trending content, like and comment on other users' photos, and follow other users to add their content to a personal feed. It was released on October 6, 2010, is available in 32 languages, and its website is https://www.instagram.com/. Later on, the service added messaging features, the ability to include multiple images or videos in a single post, and a 'Stories' feature that allows users to post photos and videos that remain visible for 24 hours. A 'Reels' section has now been added as well, which allows users to post a video of themselves or anything else within a limit of 15 to 30 seconds, using any type of audio; the craze for making reels is very high, and people are enjoying and celebrating with Instagram Reels. Instagram became the 4th most downloaded mobile app of the 2010s.

Ongoing research continues to explore how media content on the platform affects user engagement. Past research has found that media which show people's faces receive more likes and comments, and that using filters that increase warmth, exposure, and contrast also boosts engagement. Users are more likely to engage with images that depict fewer individuals rather than groups, and are also more likely to engage with content that has not been watermarked, as they view watermarked content as less original. The motives for using Instagram among young people are mainly to look at posts, particularly for the sake of social interaction and recreation.

In popular culture:

  • Social Animals is a documentary film about three teenagers growing up on Instagram.
  • Instagram model is a term for models who gain their success as a result of the large number of followers they have on Instagram.
  • Instapoetry is a style of poetry formed by poets sharing images of short poems on Instagram.
  • Instagram Pier is a cargo working area in Hong Kong that gained its nickname due to its popularity on Instagram.


PERKS OF USING INSTAGRAM

  • Instagram gives you the opportunity to learn more about your audience and reach them very easily.
  • With more than 25 million businesses actively using Instagram to market to their target audience, it’s easy to see why so many people use the app to shop. In today’s instant access retail world, shoppers want visual content to help them make buying decisions.
  • It is a new method of sharing pictures online via social media, involving various creative ideas like adding filters and making reels to make a post or story look more attractive.
  • The positive effects of Instagram include self-expression, self-identity, community building and emotional support.
  • We can also make money using Instagram: by creating sponsored posts for brands as an influencer, by becoming an affiliate and endorsing different products, or by being a virtual assistant to an influencer.
  • Your brand probably has a stable group of fans who follow your work, consume Instagram content, and like, share and comment on your posts.
  • It’s easy to share Instagram content across other channels.
  • Instagram is free advertising. With social media, including Instagram, there are no costs involved unless you decide to pour money into it.

It’s not just about capturing the moment, Instagram is also a social network,” Eyal says. Instagram is so popular because it has the ability to anticipate users’ needs, prioritize their trust, keep abreast of competition, and leverage current trends. Instagram is more than just a way to get jealous about your friend’s vacation photos. Instagram is a way to express yourself and enhance your knowledge and various skills. It is a fun place to wander around in your phone anytime. You can post anything or can add to your story with beautiful filters never seen and used before. It is a very safe place for sure as it do not reveals your identity unless you follow another person. You can enjoy it by watching various types of reels, various funny reels are also there and. You can also make reels by choosing your choice of audio. So it’s not wrong to say that Instagram is very popular and it has become the way of living.

ARE COMPUTERS REALLY INTELLIGENT?

When it comes to the possibilities and possible perils of artificial intelligence (AI), that is, learning and reasoning by machines without the intervention of humans, there are lots of opinions out there. Only time will tell which of these quotes will be the closest to our future reality. Until we get there, it is interesting to contemplate who might predict our reality best.

"The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded." – Stephen Hawking

Will computers eventually be smarter than humans? 
 
Everyone is talking about artificial intelligence (AI) – in the media, at conferences, and in product brochures. Yet the technology is still in its infancy. Applications that would have been dismissed as science fiction not long ago could become reality within a few years. With its specialty materials, the Electronics business sector of Merck is contributing to the development of AI.

HOW SMART ARE YOU?

Who's smarter – you, or the computer or mobile device on which you're reading this article? The answer is increasingly complex, and depends on definitions in flux. Computers are certainly more adept at solving quandaries that benefit from their unique skill set, but humans hold the edge on tasks that machines simply can't perform. Not yet, anyway.

Computers can take in and process certain kinds of information much faster than we can. They can swirl that data around in their "brains," made of processors, and perform calculations to conjure multiple scenarios at superhuman speeds. For example, the best chess-trained computers can at this point strategize many moves ahead, problem-solving far more deftly than the best chess-playing humans. Computers learn much more quickly, too, narrowing complex choices to the most optimal ones. Yes, humans also learn from mistakes, but when it comes to tackling the kinds of puzzles computers excel at, we're far more fallible.

Computers enjoy other advantages over people. They have better memories, so they can be fed a large amount of information and tap into all of it almost instantaneously. Computers don't require sleep the way humans do, so they can calculate, analyze and perform tasks tirelessly and round the clock. On the other hand, humans are still superior to computers in many ways. We perform tasks, make decisions, and solve problems based not just on our intelligence but on our massively parallel processing wetware – in abstract, what we like to call our instincts, our common sense, and perhaps most importantly, our life experiences. Computers can be programmed with vast libraries of information, but they can't experience life the way we do.

Part of that is rethinking how we approach these questions. Rather than obsessing over who's smarter or irrationally fearing the technology, we need to remember that computers and machines are designed to improve our lives, just as IBM's Watson computer is helping us in the fight against deadly diseases. The trick, as computers become better and better at these and any number of other tasks, is ensuring that "helping us" remains their prime directive.

The important thing to keep in mind is that it is not man versus machine. "It is not a competition. It is a collaboration."

Disadvantages of Modern Technology

In recent decades, digital technology has altered practically every area of people’s lives. Workplaces, shopping, music, movies, television, photography, travel, transportation, and long-distance communications are just a few examples of how things have changed. In fact, it’s becoming increasingly difficult to locate an electrical item or huge piece of machinery that doesn’t use digital technology in some form.

Because of digital technology, electronics have grown significantly smaller, lighter, faster, and more adaptable than they were previously. It also means that massive amounts of data may be kept locally or remotely and transferred from one location to another almost instantly. Instead of just letters and numbers, the term "information" has evolved to include photographs, audio, video, and other forms of media. Information may also be changed considerably more easily; for example, photographs, music, and movies can all be edited.

In this article, let's see the disadvantages of modern technology.

Demerits of Technology

1. Loneliness and social isolation

Because of computer and smartphone technology, social isolation is on the rise. Teens and young people are spending more time on social media, surfing the Internet, and playing video games, ignoring their real lives. Social media was created to help us make new acquaintances and converse with them. However, conversations that take place only on the screen of a smartphone or computer make people feel uncomfortable with real-life acquaintances, and some people even grow less sensitive to others as a result of this discomfort in interactions. Our previous style of engaging and meeting with people has been displaced by technology.

2. Society has become reliant on technology

Technology is becoming increasingly important in modern civilizations. Many critical services, including hospitals, electricity grids, airports, rail and road transportation networks, and military defenses, are now vulnerable to cyber-attack or catastrophic collapse. Humans would be rendered practically defenseless if technology were taken away from them overnight. We have given up on producing things with our hands and learning to live off the land.

3. Technology is a source of environmental issues

Technology causes a slew of environmental issues. Aside from the fact that most equipment and devices are made of toxic or non-biodegradable materials, most technologies require a power source, which can result in increased electricity and fossil fuel use. Beyond power, some technologies create harmful compounds directly. Although farming technology allows for more affordable and diversified food options, the inputs used to produce that food, such as pesticides and chemical fertilizers, can be harmful to humans and the environment.

4. Cost

Keeping current with the latest and greatest technology can be costly, even if it saves you money in the long run. Investing in used equipment, staying a half-step behind the current tech development cycles, and enabling your employees to use their personal devices can all help you save money here.

5. Expenditure of Time

We devote a significant amount of time to our convenient technology. For example, when we want to be entertained, we turn to our phones. We play video games, take the elevator instead of walking, and watch the news, videos, and images of our friends on Facebook, or participate in pointless online discussions. However, if you give up all devices for a few days, you'll be surprised at how much time you save. Time saved can be put to good use by participating in sports and exercising, meditating, or spending time with loved ones.

Conclusion

None of these drawbacks imply that technology is inherently harmful or should be avoided. Rather, they show that technology isn’t a flawless or all-encompassing solution for improving workplace performance and culture. Work to understand both sides of technological integration and make allowances for the real flaws that technology can bring.

Advantages of Digital India

The Government of India began the Digital India initiative to ensure that residents can access government services electronically, through enhanced online infrastructure and increased Internet connectivity, making the country digitally empowered in the field of technology. Rural communities will be connected to high-speed internet networks as part of the effort. The construction of a safe and robust digital infrastructure, the delivery of government services online, and universal digital literacy are the three main components of Digital India. It was launched on July 1, 2015, by Indian Prime Minister Narendra Modi, and is complemented by other government schemes such as BharatNet, Make in India, Startup India and Standup India, industrial corridors, Bharatmala, Sagarmala, dedicated freight corridors, UDAN-RCS, and E-Kranti.

In this article, let's see the advantages of the Digital India campaign.

Benefits of Digital India

  • Removal of the black economy: all internet transactions can be easily monitored, and every payment made by a customer to any business is logged, ensuring that no unlawful transactions occur and that people cannot hide their money. By discouraging cash-based transactions in favour of digital payments, the government can effectively squeeze out the underground economy.
  • Payment without cash: with cashless payment you won't need to carry cash, so cash theft will be reduced, and when you make an online payment, the transaction is recorded by your bank. Customers can also get discounts and cashback on cashless payments through apps like Paytm, PhonePe, and Freecharge, and shopping websites such as Flipkart and Amazon offer discounts on online bank payments.
  • Increasing revenues: as transactions become more digitized, monitoring sales and taxes becomes much easier. Because transactions are recorded, customers now receive a bill for each purchase they make, and merchants can no longer avoid paying tax to the government, resulting in increased government revenue and thus growth of the economy.
  • An improved economy: Digital India has made its most significant contribution to India's Internet connectivity. The cashless economy relies completely on the Internet, and with all types of information available online, the Digital India program will considerably boost the country's economy.
  • New jobs: the Digital India program has expanded job opportunities in new markets and increased employment prospects in existing ones. New markets have begun hiring, raising the employment rate.
  • A foundation for e-governance: e-governance is a great advantage for all citizens, since it is easier, faster, and safer than traditional governance. With e-governance, you can now receive anything from a birth certificate to a death certificate in seconds, making it easy for individuals to get the information they need on the go.
  • People's empowerment: one of the most significant benefits of Digital India is that it empowers citizens. When payments go digital, everyone needs a bank account, a phone, and other electronic devices, and the government can quickly send subsidies directly to people's Aadhaar-linked bank accounts. In other words, people no longer have to wait for the government to hand them the incentives and subsidies they are entitled to. In most cities, this feature is already in place; the LPG subsidy, for example, is now paid through bank transfers.

Role of science in making India

New green pasture beckons

In the last few years, science has helped a lot in the development of India, contributing to every sector. Science has improved the economy, increased employment opportunities, saved millions of lives, and played a major role in many industries. Science is very important for the growth and development of India, and it even plays a key role in our daily lives. Every country should invest as much as possible in research and development for scientific technologies. In this essay on the role of science in making India, we will see how science has helped India grow in different sectors.

How Have Indian Scientists Helped India Grow?

When it comes to Indian scientists, the first name that comes to mind is CV Raman, the first Asian to win a Nobel Prize in the sciences. His work was related to light and sound. He discovered that when light passes through a transparent material, some of the deflected light changes its wavelength, an effect now known as Raman scattering.

APJ Abdul Kalam is the second name that comes to mind among Indian scientists. He worked as an aerospace engineer with ISRO and DRDO and was also President of India from 2002 to 2007. Abdul Kalam contributed a lot to aerospace; one of his contributions was the deployment of the Rohini satellite into near-earth orbit. A few more names are Homi Bhabha, Visvesvaraya, V Radhakrishnan, Satyendra Nath Bose, and many more.


How has Science Increased Employment Opportunities?

Whenever a new technology is discovered, it leads to new industries. For example, if a new scientific device is invented, it will require qualified professionals to operate it. Such inventions help increase employment opportunities, which in turn supports growth in many businesses and develops the Indian economy.

Curing Diseases and Saving Lives

In the last few years, medical science has evolved enormously and saved countless lives. New technologies like wireless brain sensors, artificial organs, smart inhalers, robotic surgery, and virtual reality are making work easier for thousands of doctors around the world, saving millions of lives and curing diseases.

Role in Agriculture Sector

Science has played a major role in the agriculture sector. Food is one of the basic needs of our lives, and science has now produced many new agricultural techniques which have increased production drastically. The old, mundane techniques farmers used to follow were slow, expensive, and required too much effort.


Science has made everything a lot easier for farmers. Improved irrigation facilities, modern fertilizers, advanced equipment, and pesticides are all helping farmers work faster and save more money.

Conclusion

Science has helped us a lot in many ways, and it will keep helping. Everyone should not only invest as much as possible in science and technology but also stay aware of all the new technologies being developed around the world.