Expressing oneself through art seems a universal human impulse, while the style of that expression is one of the distinguishing marks of a culture. As difficult as it is to define, art typically involves a skilled, imaginative creator whose creation is pleasing to the senses and often symbolically significant or useful. Art can be verbal, as in poetry, storytelling, or literature, or it can take the form of music and dance. The oldest stories, passed down orally, may be lost to us now, but thanks to writing, tales such as the Epic of Gilgamesh or the Iliad entered the record and still hold meaning today. Visual art dates back at least 30,000 years, when Paleolithic humans decorated themselves with beads and shells. Then as now, skilled artisans often mixed aesthetic effect with symbolic meaning.
In an existence that centered on hunting, ancient Australians carved animal and bird tracks into rock. Early cave artists in Lascaux, France, painted or engraved more than 2,000 real and mythical animals. Ancient Africans created stirring masks, highly stylized depictions of animals and spirits that allowed the wearer to embody the spiritual power of those beings. Even when creating tools or kitchen items, people seem unable to resist decorating or shaping them for beauty. Ancient hunters carved the ivory handles of their knives. Ming dynasty ceramists embellished plates with graceful dragons. Modern Pueblo Indians incorporate traditional motifs into their carved and painted pots. The Western fine arts tradition values both beauty and message. Once heavily influenced by Christianity and classical mythology, painting and sculpture have more recently moved toward personal expression and abstraction.
Humans have probably been molding clay, one of the most widely available materials in the world, since the earliest times. The era of ceramics began, however, only after the discovery that very high heat renders clay hard enough to be impervious to water. As societies grew more complex and settled, the need for ways to store water, food, and other commodities increased. In Japan, the Jomon people were making ceramics as early as 11,000 B.C. By about the seventh millennium B.C., kilns were in use in the Middle East and China, achieving temperatures above 1,832°F. Mesopotamians were the first to develop true glazes, though the art of glazing arguably reached its highest expression in the celadon and three-color glazes of medieval China. In the New World, although potters never reached the heights of technology seen elsewhere, Moche, Maya, Aztec, and Puebloan artists created a diversity of expressive figurines and glazed vessels.
When Spanish nobleman Marcelino Sanz de Sautuola described the paintings he discovered in a cave at Altamira, contemporaries declared the whole thing a modern fraud. Subsequent finds confirmed the validity of his claims and proved that Paleolithic people were skilled artists. Early artists used stone tools to engrave shapes into walls. They used pigments from hematite, manganese dioxide, and evergreens to achieve red, yellow, brown, and black colors. Brushes were made from feathers, leaves, and animal hair. Artists also used blowpipes to spray paint around hands and stencils.
The human brain is the central organ of the human nervous system, and with the spinal cord makes up the central nervous system. The brain consists of the cerebrum, the brainstem and the cerebellum. The cerebrum, the largest part of the human brain, consists of two cerebral hemispheres. It controls most of the activities of the body, processing, integrating, and coordinating the information it receives from the sense organs, and making decisions as to the instructions sent to the rest of the body. The brain is contained in, and protected by, the skull bones of the head.
The brain has three main parts: the forebrain, the midbrain, and the hindbrain. The hindbrain includes the upper part of the spinal cord, the brain stem, and a wrinkled ball of tissue called the cerebellum. Brains are made of soft tissue, which includes gray and white matter containing the nerve cells, non-neuronal cells that help to maintain neurons and brain health, and small blood vessels. They have a high water content as well as a large amount of fat, nearly 60 percent.
FUNCTIONS OF THE BRAIN:-
* Attention and concentration.
* Self-monitoring.
* Organization.
* Speaking (expressive language).
* Planning and initiation.
* Awareness of abilities and limitations.
* Personality.
* Mental flexibility.
* Inhibition of behavior.
While alive and pulsating, the human brain physically appears white, greyish black, and pinkish red. The brain itself does not feel pain, because there are no nociceptors located in brain tissue. This feature explains why neurosurgeons can operate on brain tissue without causing a patient discomfort and, in some cases, can even perform surgery while the patient is awake. The brain controls vital functions such as breathing, swallowing, digestion, eye movement, and heartbeat, so there can be no life without it. The brain is also capable of some remarkable feats, with one part able to compensate for deficiencies in another.
SUBHASH CHANDRA BOSE: Subhash Chandra Bose is fondly remembered as one of the greatest freedom fighters of India and is popularly known by the name ‘Netaji’ (Respected Leader). He was strongly influenced by Swami Vivekananda’s teachings and believed that the Bhagavad Gita was a great source of inspiration for the struggle against the British. Bose was an Indian nationalist and a prominent figure of the Indian independence movement. He was the supreme commander of the Indian National Army during World War II, and he always pitched for the complete and unconditional independence of India from British rule.
CHILDHOOD:
Subhash Chandra Bose was born to Prabhavati Devi and Janakinath Bose on January 23, 1897, in Odisha. He took admission into the Protestant European School, which was run by the Baptist Mission. He did his B.A. in Philosophy at Presidency College in Calcutta, and was later expelled for assaulting a professor over the latter’s anti-India remarks. After the incident, Bose came to be regarded as a rebel. During his college days, he gradually developed a nationalistic temperament and became socially and politically aware.
POLITICAL LIFE:
After a few years, Bose resigned from his civil service job in April 1921 and returned to India, later joining the Indian National Congress to fight for the independence of India. Subhash Chandra Bose started the newspaper ‘Swaraj’ and took charge of publicity for the Bengal Provincial Congress Committee. In 1923, Bose was elected President of the All India Youth Congress and Secretary of the Bengal State Congress. He was also editor of the newspaper ‘Forward’, founded by his mentor Chittaranjan Das, and he served as the CEO of the Calcutta Municipal Corporation. By December 1927, Bose had been appointed General Secretary of the INC.
In November 1934, he wrote the first part of his book ‘The Indian Struggle’, which covered nationalism and India’s independence movement during 1920–1934, but the British government banned the book. In 1938, he agreed to accept nomination as Congress President and presided over the Haripura session. However, due to his strong differences with Mahatma Gandhi and Jawaharlal Nehru, he resigned in 1939.
ROLE IN INDIAN INDEPENDENCE:
S C Bose was always in favour of armed revolution in order to expel the British from India. During the Second World War, Bose formed the Indian National Army (INA) with the help of the Imperial Japanese Army, and also founded an Indian radio station called ‘Azad Hind Radio’.
A few years later, he travelled to Japan, where more soldiers and civilians joined the INA. Even when faced with military reverses, Bose was able to maintain support for the Azad Hind movement. In Europe, S C Bose sought help from Adolf Hitler and Benito Mussolini for the liberation of India. Bose had struck an alliance with Japan and Germany as he felt that his presence in the East would help India in the freedom struggle against the British.
MEMORIAL:
Bose has been featured on Indian postage stamps in 1964, 1993, 1997, 2001, 2016, and 2018, as well as on a ₹2 coin in 1996 and 1997, a ₹75 coin in 2018, and a ₹125 coin in 2021. Netaji Subhash Chandra Bose International Airport at Kolkata, Netaji Subhash Chandra Bose Island (formerly Ross Island), and many other institutions in India are named after him. On 23 August 2007, Japanese Prime Minister Shinzo Abe visited the Subhas Chandra Bose memorial hall in Kolkata. Abe said to Bose’s family, “The Japanese are deeply moved by Bose’s strong will to have led the Indian Independence Movement from British rule. Netaji is a much respected name in Japan.”
In 2021, the Government of India declared 23 January as Parakram Divas to commemorate the birth anniversary of Subhas Chandra Bose. Political parties, the Trinamool Congress and the All India Forward Bloc, demanded that the day instead be observed as Deshprem Divas.
***** MY HERO IS SUBHAS CHANDRA BOSE. THIS ARTICLE IS DEDICATED TO YOU *****
The slogan “Jai Hind” was popularized by Subhas Chandra Bose.
When a massive star dies, it leaves a small but dense remnant core in its wake. If the mass of the core is more than about three times the mass of the Sun, the force of gravity overwhelms all other forces and a black hole is formed. Imagine a star ten times more massive than our Sun being squeezed into a sphere with a diameter roughly that of New York City. The result is a celestial object whose gravitational field is so strong that nothing, not even light, can escape it. The history of black holes begins with the father of all physics, Isaac Newton. In 1687, Newton gave the first description of gravity in his publication Principia Mathematica, a work that would change the world.
Then, about 100 years later, John Michell proposed the idea that there could exist an object massive enough that not even light would be able to escape its gravitational pull. In 1796, the famous French scientist Pierre-Simon Laplace made an important prediction about the nature of black holes: he suggested that because the escape velocity of such a body would exceed even the speed of light, these massive objects would be invisible. In 1915, Albert Einstein changed physics forever by publishing his theory of general relativity, in which he explained the curvature of spacetime and gave a mathematical description of a black hole. And in 1964, John Wheeler gave these objects the name “black hole.”
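Michell's and Laplace's reasoning can be made concrete with the Newtonian escape-velocity formula v_esc = sqrt(2GM/r): setting v_esc equal to the speed of light gives a critical radius r = 2GM/c², which coincides with the Schwarzschild radius of general relativity. The short sketch below is a rough illustration using standard physical constants not quoted in the text, applied to the ten-solar-mass star mentioned earlier:

```python
# Rough sketch: radius at which Newtonian escape velocity equals the speed of light.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

M = 10 * M_sun                   # the ten-solar-mass core from the example above
r_critical = 2 * G * M / c**2    # r = 2GM/c^2, the Schwarzschild radius

print(f"{r_critical / 1000:.1f} km")   # ~29.5 km, roughly the scale of a large city
```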
The Gargantua in Interstellar is an incredibly close representation of an actual black hole. In classical physics, the mass of a black hole cannot decrease; it can either stay the same or get larger, because nothing can escape a black hole. If mass and energy are added to a black hole, then its radius and surface area should also get bigger. For a black hole, this radius is called the Schwarzschild radius. The second law of thermodynamics states that the entropy of a closed system always increases or remains the same. In 1974, Stephen Hawking, an English theoretical physicist and cosmologist, proposed a groundbreaking theory regarding a special kind of radiation, which later became known as Hawking radiation. Hawking also postulated an analogous theorem for black holes, called the second law of black hole mechanics: in any natural process, the surface area of the event horizon of a black hole always increases or remains constant; it never decreases. In thermodynamics, a black body does not transmit or reflect any radiation; it only absorbs radiation.
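The link between horizon area and entropy described above is usually written as the Bekenstein-Hawking relation S = k_B c³ A / (4 ħ G), where A = 4π r_s² is the horizon area. That formula is standard physics rather than something quoted in the text, and the numbers below are only an order-of-magnitude sketch for a one-solar-mass black hole:

```python
import math

# Order-of-magnitude sketch of the Bekenstein-Hawking entropy for one solar mass.
G, c = 6.674e-11, 2.998e8            # SI units
hbar, k_B = 1.055e-34, 1.381e-23
M = 1.989e30                         # one solar mass, kg

r_s = 2 * G * M / c**2               # Schwarzschild radius, ~2.95 km
A = 4 * math.pi * r_s**2             # horizon area, m^2
S = k_B * c**3 * A / (4 * hbar * G)  # Bekenstein-Hawking entropy, J/K

print(f"S = {S:.2e} J/K (about {S / k_B:.1e} k_B)")
# Adding mass increases r_s, A, and S together, consistent with the area law above.
```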
When Stephen Hawking first encountered these ideas, he found the notion of shining black holes preposterous. But when he applied the laws of quantum mechanics to general relativity, he found the opposite to be true: he realized that stuff can come out near the event horizon. In 1974, he published a paper where he outlined a mechanism for this shine, based on the Heisenberg uncertainty principle. According to quantum mechanics, for every particle in the universe there exists an antiparticle. These particles always exist in pairs and continually pop in and out of existence everywhere in the universe. Typically, they don’t last long, because as soon as a particle and its antiparticle pop into existence, they annihilate each other and cease to exist almost immediately after their creation.
The event horizon is the boundary beyond which nothing can escape the black hole’s gravity. If a virtual particle pair blips into existence very close to the event horizon of a black hole, one of the particles could fall into the black hole while the other escapes. The one that falls into the black hole effectively has negative energy, which is, in layman’s terms, akin to subtracting energy from the black hole, or taking mass away from it. The other particle of the pair, the one that escapes the black hole, has positive energy and is referred to as Hawking radiation.
The first-ever image of a black hole was captured by the Event Horizon Telescope (EHT) in 2019. Due to the presence of Hawking radiation, a black hole continues to lose mass and keeps shrinking until the point where it loses all its mass and evaporates. It is not clearly established what an evaporating black hole would actually look like. The Hawking radiation itself would contain highly energetic particles, antiparticles, and gamma rays. Such radiation is invisible to the naked eye, so an evaporating black hole might not look like anything at all. It is also possible that Hawking radiation might power a hadronic fireball, which could degrade the radiation into gamma rays and particles of less extreme energy, making an evaporating black hole visible. Scientists and cosmologists still don’t completely understand how quantum mechanics explains gravity, but Hawking radiation continues to inspire research and provide clues into the nature of gravity and how it relates to the other forces of nature.
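Two standard formulas make the evaporation picture quantitative: the Hawking temperature T = ħ c³ / (8π G M k_B) and the approximate evaporation time t ≈ 5120 π G² M³ / (ħ c⁴). Neither appears in the text above; they are quoted here as commonly stated results, and the sketch below only checks orders of magnitude for a solar-mass black hole:

```python
import math

G, c = 6.674e-11, 2.998e8
hbar, k_B = 1.055e-34, 1.381e-23
M_sun = 1.989e30
YEAR = 3.156e7                     # seconds per year

def hawking_temperature(mass_kg):
    """Hawking temperature in kelvin for a black hole of the given mass."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

def evaporation_time_years(mass_kg):
    """Approximate evaporation time in years, ignoring anything falling in."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4) / YEAR

print(f"{hawking_temperature(M_sun):.1e} K")      # ~6e-8 K, far colder than the CMB
print(f"{evaporation_time_years(M_sun):.1e} yr")  # ~2e67 years, so evaporation is glacially slow
```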
Human Rights Day is observed on 10 December every year, since it was on this day in 1948 that the United Nations General Assembly (UNGA) adopted the Universal Declaration of Human Rights, a milestone document that enshrines the rights and freedoms of all human beings. The National Human Rights Commission of India defines human rights, as provided under the Protection of Human Rights Act (PHRA), 1993, as the rights relating to life, liberty, equality, and dignity of the individual guaranteed by the Constitution or embodied in the international covenants and enforceable by courts in India. The international document also commits nations to recognise all humans as being “born free and equal in dignity and rights” regardless of “nationality, place of residence, gender, national or ethnic origin, colour, religion, language, or any other status.”
Human rights in India:-
* Right to equality.
* Right to freedom.
* Right against exploitation.
* Right to freedom of religion.
* Right to life.
* Cultural and educational rights.
An Act to provide for the constitution of a National Human Rights Commission, State Human Rights Commissions in States, and Human Rights Courts for better protection of human rights and for matters connected therewith or incidental thereto. Ministry: Ministry of Home Affairs. Department: Department of States. Protection of human rights is essential for the development of the people of a country, which ultimately leads to the development of the nation as a whole. The Constitution of India guarantees basic human rights to each and every citizen of the country. It also guarantees to all Indian women equality (Article 14), no discrimination by the State (Article 15(1)), equality of opportunity (Article 16), equal pay for equal work (Article 39(d)), and just and humane conditions of work (Article 42).
Human rights are important because no one should be abused or discriminated against, and because everyone should have the chance to develop their talents. Unfortunately, many people around the world don’t have these basic rights and freedoms. It is the constitutional mandate of the judiciary to protect the human rights of citizens. The Supreme Court and High Courts are empowered to take action to enforce these rights, and machinery for redress is provided under Articles 32 and 226 of the Constitution.
The most significant human rights issues included police and security force abuses, such as extrajudicial killings, disappearances, torture, arbitrary arrest and detention, rape, harsh and life-threatening prison conditions, and lengthy pretrial detention.
In India, a child has the right to be protected from neglect, exploitation, and abuse at home and elsewhere. Children have the right to be protected from the incidence of abuse, exploitation, violence, neglect, commercial sexual exploitation, trafficking, child labour, and harmful traditional practices.
Nanotechnology, also shortened to nanotech, is the use of matter on an atomic, molecular, and supramolecular scale for industrial purposes.
WHAT IS NANOTECHNOLOGY?
Nanotechnology is the manipulation of matter on a near-atomic scale to produce new structures, materials, and devices. Nanotechnology generally refers to engineered structures, devices, and systems, and nanomaterials are defined as materials with a length scale between 1 and 100 nanometers.
NANOTECHNOLOGY IS USED IN:-
* Food security. Nanosensors in packaging can detect salmonella and other contaminants in food.
* Medicine.
* Energy.
* Automotive.
* Environment.
* Electronics.
* Textiles.
* Cosmetics.
IS NANOTECHNOLOGY THE FUTURE:-
Nanotechnology is an emerging science which is expected to have rapid and strong future developments. It is predicted to contribute significantly to economic growth and job creation in the EU in the coming decades. According to scientists, nanotechnology is predicted to have four distinct generations of advancement.
NANOMEDICINE:-
Nanomedicine, the application of nanomaterials and devices to medical problems, has demonstrated great potential for enabling improved diagnosis, treatment, and monitoring of many serious illnesses, including cancer, cardiovascular and neurological disorders, HIV/AIDS, and diabetes.
Politics in India works within the framework of the Constitution of India. India is a parliamentary democratic republic in which the President of India is the head of state and the Prime Minister of India is the head of government.
POLITICAL PARTY IN INDIA:-
As of 23 September 2021, the total number of parties registered with the Election Commission of India was 2,858, comprising 8 national parties, 54 state parties, and 2,796 unrecognised parties.
THREE BRANCHES OF INDIAN GOVERNMENT:-
* Executive.
* Legislative.
* Judiciary.
EXECUTIVE:-
The executive branch of the Indian government includes:
* President of India.
* Prime Minister of India.
* Union Cabinet.
* Council of Ministers.
* Bureaucrats.
PRESIDENT OF INDIA:-
The President of India holds the highest post and is the constitutional head of the country. According to our Constitution, the President is the first citizen of our country and a symbol of unity and integrity. The President is also responsible for appointing other executive and judicial officers in the country, such as the Chief Justice of India, the judges of all the High Courts, and the Election Commissioners of India and of the states. The President of India is also the Commander-in-Chief of all the Indian forces, i.e. the Indian Army, the Indian Navy, and the Indian Air Force.
PRIME MINISTER OF INDIA:-
The Prime Minister is the head of the central government of India and also acts as the advisor to the President. He is also the head of the Council of Ministers and is responsible for appointing or dismissing any minister from the council. If the Prime Minister resigns or dies during his tenure, the cabinet automatically dissolves.
UNION CABINET:-
The Union Cabinet consists of the Prime Minister and the Cabinet Ministers. It is the decision-making body in the central government. A Cabinet Minister cannot make a law concerning his department on his own; he can only propose a decision, and the Union Cabinet makes the final law.
COUNCIL OF MINISTERS:-
The Council of Ministers works under the Union Cabinet. All members of the Union Cabinet are members of the Council of Ministers, and the Ministers of State are appointed by the President on the advice of the Prime Minister.
BUREAUCRATS:-
Bureaucrats are selected and appointed by Union Public Service Commission and for states there is a State Public Service Commission. They are responsible for implementing the laws and all the other functions of government. Bureaucrats consist of IAS, IPS, IFS and other officials leading various government agencies.
LEGISLATIVE:-
The legislature consists of the President, the Lok Sabha (also known as the lower house), and the Rajya Sabha (also known as the upper house).
LOK SABHA:-
The Lok Sabha is the more powerful of the two houses. Members of the Lok Sabha are elected directly by the citizens of India: there are a total of 530 members from the states and 20 members from the union territories. They are elected in general elections for a term of five years.
RAJYA SABHA:-
There cannot be more than 250 members in the Rajya Sabha. Members of the Rajya Sabha are elected by the state legislative assemblies, except for 12 members who are directly nominated by the President from different backgrounds such as literature, art, and social service.
JUDICIARY:-
The judiciary includes the Supreme Court, the High Courts, and the district courts.
SUPREME COURT:-
The Supreme Court is the highest judicial body of the country. Its decisions are binding on all other judicial bodies, and no other judicial body has the power to overturn a decision of the Supreme Court. The Indian judiciary is an independent body and is not controlled by the legislature or the executive. The Supreme Court has the power to question any decision made by legislative bodies or the executive if it is not in accordance with the Constitution of India. The Supreme Court of India consists of a maximum of 34 judges: the Chief Justice of India and 33 other judges.
HIGH COURT:-
The High Court is the highest judicial body of a state and functions under the Supreme Court. It maintains the rule of law in its state, and two small states may share a common High Court.
DISTRICT COURT:-
District courts, or subordinate courts, function under the High Court. They maintain the rule of law in the particular district or locality in which they function and handle the civil and criminal matters of that region.
Bitcoin is a decentralized digital currency created in January 2009. It follows the ideas set out in a white paper by the mysterious and pseudonymous Satoshi Nakamoto. The identity of the person or persons who created the technology is still a mystery. Bitcoin offers the promise of lower transaction fees than traditional online payment mechanisms do, and unlike government-issued currencies, it is operated by a decentralized authority.
Bitcoin is known as a type of cryptocurrency because it uses cryptography to keep it secure. There are no physical bitcoins, only balances kept on a public ledger that everyone has transparent access to (although each record is encrypted). All Bitcoin transactions are verified by a massive amount of computing power via a process known as “mining.” Bitcoin is not issued or backed by any banks or governments, nor is an individual bitcoin valuable as a commodity. Despite it not being legal tender in most parts of the world, Bitcoin is very popular and has triggered the launch of hundreds of other cryptocurrencies, collectively referred to as altcoins. Bitcoin is commonly abbreviated as BTC when traded.
Understanding Bitcoin The Bitcoin system is a collection of computers (also referred to as “nodes” or “miners”) that all run Bitcoin’s code and store its blockchain. Figuratively speaking, a blockchain can be thought of as a collection of blocks. In each block is a collection of transactions. Because all of the computers running the blockchain have the same list of blocks and transactions and can transparently see these new blocks as they’re filled with new Bitcoin transactions, no one can cheat the system.
Anyone, whether they run a Bitcoin “node” or not, can see these transactions occurring in real time. To achieve a nefarious act, a bad actor would need to operate 51% of the computing power that makes up Bitcoin. Bitcoin has around 13,768 full nodes as of mid-November 2021, and this number is growing, making such an attack quite unlikely.
But if an attack were to happen, Bitcoin miners, the people who take part in the Bitcoin network with their computers, would likely split off to a new blockchain, making the effort the bad actor put forth to achieve the attack a waste.
Balances of Bitcoin tokens are kept using public and private “keys,” which are long strings of numbers and letters linked through the mathematical encryption algorithm that creates them. The public key (comparable to a bank account number) serves as the address published to the world and to which others may send Bitcoin.
The private key (comparable to an ATM PIN) is meant to be a guarded secret and only used to authorize Bitcoin transmissions. Bitcoin keys should not be confused with a Bitcoin wallet, which is a physical or digital device that facilitates the trading of Bitcoin and allows users to track ownership of coins. The term “wallet” is a bit misleading because Bitcoin’s decentralized nature means it is never stored “in” a wallet, but rather distributed on a blockchain.
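The key idea, that the public address is derived one-way from a guarded secret, can be sketched in a few lines of Python. This is a deliberately simplified, hypothetical illustration: real Bitcoin derives addresses from an ECDSA public key on the secp256k1 curve via a longer chain of hashing and encoding, none of which is reproduced here.

```python
import hashlib
import secrets

# Toy sketch only: a random 256-bit secret stands in for a private key, and a
# SHA-256 digest stands in for the published address. Real Bitcoin uses
# elliptic-curve key pairs and a different address-derivation chain.
private_key = secrets.token_bytes(32)                # guard this like an ATM PIN
public_id = hashlib.sha256(private_key).hexdigest()  # safe to publish, like an account number

print("share this:", public_id)
# Going backwards from public_id to private_key would mean inverting SHA-256,
# which is computationally infeasible; that one-way property is the point.
```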
Peer-to-Peer Technology: Bitcoin is one of the first digital currencies to use peer-to-peer (P2P) technology to facilitate instant payments. The independent individuals and companies who own the governing computing power and participate in the Bitcoin network (Bitcoin “miners”) are in charge of processing the transactions on the blockchain and are motivated by rewards (the release of new Bitcoin) and transaction fees paid in Bitcoin.
These miners can be thought of as the decentralized authority enforcing the credibility of the Bitcoin network. New bitcoins are released to miners at a fixed but periodically declining rate. There are only 21 million bitcoins that can be mined in total. As of November 2021, there are over 18.875 million Bitcoin in existence and less than 2.125 million Bitcoin left to mine.
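The 21 million figure can be checked from the halving schedule described in the mining section below (an initial 50 BTC subsidy, halved every 210,000 blocks, with amounts tracked in whole satoshis). A rough sketch, assuming those parameters:

```python
# Back-of-the-envelope check of the ~21 million BTC supply cap.
SATOSHIS_PER_BTC = 100_000_000
BLOCKS_PER_HALVING = 210_000

subsidy = 50 * SATOSHIS_PER_BTC   # initial block subsidy, in satoshis
total = 0
while subsidy > 0:
    total += subsidy * BLOCKS_PER_HALVING
    subsidy //= 2                 # integer halving, so the subsidy eventually reaches zero

print(total / SATOSHIS_PER_BTC)   # ~20,999,999.98 BTC, just under 21 million
```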
Bitcoin Mining:
Bitcoin mining is the process by which Bitcoin is released into circulation. Generally, mining requires solving computationally difficult puzzles to discover a new block, which is added to the blockchain.
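The "computationally difficult puzzle" can be illustrated with a toy proof-of-work loop: keep changing a nonce until the hash of the block data meets a difficulty condition. This is only a sketch; real Bitcoin hashes an 80-byte block header twice with SHA-256 and compares the result against a numeric target rather than counting leading zeros.

```python
import hashlib

def mine(block_data, difficulty=4):
    """Search for a nonce whose SHA-256 digest starts with `difficulty` hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

# Hypothetical block data; finding the nonce is slow, verifying it is instant.
nonce, digest = mine("previous-hash + transactions")
print(nonce, digest)
```

Raising the difficulty by one hex digit makes the search roughly 16 times longer on average, which is how the network keeps block discovery slow even as hardware improves.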
Bitcoin mining adds and verifies transaction records across the network. Miners are rewarded with some Bitcoin; the reward is halved every 210,000 blocks. The block reward was 50 new bitcoins in 2009. On May 11, 2020, the third halving occurred, bringing the reward for each block discovery down to 6.25 bitcoins.
Early Timeline of Bitcoin:
Aug. 18, 2008 The domain name Bitcoin.org is registered. Today, at least, this domain is WhoisGuard Protected, meaning the identity of the person who registered it is not public information.
Oct. 31, 2008 A person or group using the name Satoshi Nakamoto makes an announcement to the Cryptography Mailing List at metzdowd.com: “I’ve been working on a new electronic cash system that’s fully peer-to-peer, with no trusted third party.” This now-famous white paper published on Bitcoin.org, entitled “Bitcoin: A Peer-to-Peer Electronic Cash System,” would become the Magna Carta for how Bitcoin operates today.
Jan. 3, 2009 The first Bitcoin block, Block 0, is mined. This is also known as the “genesis block” and contains the text: “The Times 03/Jan/2009 Chancellor on brink of second bailout for banks,” perhaps as proof that the block was mined on or after that date, and perhaps also as relevant political commentary.
Jan. 8, 2009 The first version of the Bitcoin software is announced to the Cryptography Mailing List.
Jan. 9, 2009 Block 1 is mined, and Bitcoin mining commences in earnest.
Bitcoin employment opportunities: Those who are self-employed can get paid for a job related to Bitcoin. There are several ways to achieve this, such as creating an internet service and adding your Bitcoin wallet address to the site as a form of payment. There are also several websites and job boards that are dedicated to digital currencies.
Why Is Bitcoin Valuable? Bitcoin’s price has risen exponentially in just over a decade, from less than $1 in 2011 to more than $68,000 as of November 2021. Its value is derived from several sources, including its relative scarcity, market demand, and marginal cost of production. Thus, even though it is intangible, Bitcoin commands a high valuation, with a total market cap of $1.11 trillion as of November 2021.
Is Bitcoin a Scam? Even though Bitcoin is virtual and can’t be touched, it is certainly real. Bitcoin has been around for more than a decade and the system has proved itself to be robust. The computer code that runs the system, moreover, is open source and can be downloaded and analyzed by anybody for bugs or evidence of nefarious intent. Of course, fraudsters may attempt to swindle people out of their Bitcoin or hack sites such as crypto exchanges, but these are flaws in human behavior or third-party applications and not in Bitcoin itself.
How Many Bitcoins Are There? The maximum number of bitcoins that will ever be produced is 21 million, and the last bitcoin will be mined at some point around the year 2140. As of November 2021, more than 18.85 million (almost 90%) of those bitcoins have been mined. Moreover, researchers estimate that up to 20% of those bitcoins have been “lost” due to people forgetting their private key, dying without leaving any access instructions, or sending bitcoins to unusable addresses.
Where Can I Buy Bitcoin? There are several online exchanges that allow you to purchase Bitcoin. In addition, Bitcoin ATMs, internet-connected kiosks that can be used to buy bitcoins with credit cards or cash, have been popping up around the world. Or, if you know a friend who owns some bitcoins, they may be willing to sell them to you directly without any exchange at all.
MUMBAI:-
Mumbai, formerly called Bombay, is a densely populated city on India’s west coast and, as a financial center, it is India’s largest city. On the Mumbai Harbour waterfront stands the iconic Gateway of India stone arch, built by the British Raj in 1924. India’s share market is also based in Mumbai. Mumbai is often called the city of dreams and is reported to be the seventh cheapest city in the world. Because Mumbai is a huge and populous city, the level of crime is high. Travelers can easily become victims, so they should avoid traveling alone on public transport or in taxis, especially at night. There have been reports of British tourists becoming the victims of a scam by taxi drivers.
DELHI:-
New Delhi is the capital of India and an administrative district of the National Capital Territory of Delhi. It is the seat of all three branches of the Government of India, hosting the Rashtrapati Bhavan, Parliament House, and the Supreme Court of India. Delhi is a union territory, situated alongside the River Yamuna and bordered by Haryana state on three sides and by Uttar Pradesh state to the east. Delhi is relatively safe in terms of petty crime, though pickpocketing can be a problem in crowded areas, so keep your valuables safe. Roads are notoriously congested. New Delhi is best known as the location of India’s national government, and it has great historical significance as the home of powerful people such as the Pāṇḍavas and the Mughals. The city has many historical monuments and tourist attractions as well as lively marketplaces and great food, such as chaat. The Taj Mahal, one of the wonders of the world, is located in nearby Agra, a popular day trip from Delhi.
BANGALORE:-
Bengaluru, also called Bangalore, is the capital of India’s southern Karnataka state. The center of India’s high-tech industry, the city is also known for its parks and nightlife. By Cubbon Park, Vidhana Soudha is a Neo-Dravidian legislative building. Bengaluru has a population of more than 8 million and a metropolitan population of around 11 million, making it the third most populous city and fifth most populous urban agglomeration in India. The economy of Bangalore and its metropolitan area is currently estimated at US$110 billion, making it India’s fourth-richest metropolitan area.
KOLKATA:-
Kolkata, formerly Calcutta, is the capital of India’s West Bengal state. Founded as an East India Company trading post, it was India’s capital under the British Raj from 1773 to 1911. It is known for its grand colonial architecture, art galleries, and cultural festivals. It is also home to Mother House, headquarters of the Missionaries of Charity, founded by Mother Teresa, whose tomb is on site. Kolkata gained the top spot in the list of the country’s safest cities for the year 2020. Kolkata is also known as the Black City.
CHENNAI:-
Chennai, on the Bay of Bengal in eastern India, is the capital of the state of Tamil Nadu. It was formerly called Madras. The Chennai Metropolitan Area is one of the largest municipal economies in India, with more than one-third of India’s automobile industry based in the city. Home to the Tamil film industry, Chennai is also known as a major film production centre, and it is one of the 100 Indian cities to be developed as a smart city under the Smart Cities Mission. Chennai also has the world’s second-largest urban beach and a well-known zoological park, and there are many places to visit in the city.
Social media marketing means selling or promoting a company’s products using social media platforms such as Facebook, Twitter, Instagram, and Pinterest.
Uses of social media marketing:-
* It is used to sell or promote products at a global level.
* With a strong social media strategy and the ability to create engaging content, marketers can engage their audience.
* Social media marketing allows people to find the products they want.
EFFECTIVE TYPE OF SOCIAL MEDIA MARKETING:-
* Facebook advertising
* Twitter advertising
* Instagram advertising
* YouTube advertising
* Pinterest advertising
* LinkedIn advertising.
BENEFITS OF SOCIAL MEDIA MARKETING:-
* Grow your sales and your fanbase.
* Use customer generated content for ads (which perform better, too!).
* Better target new and regular customers so that wasted ad spend can be reduced.
DISADVANTAGES OF SOCIAL MEDIA MARKETING:-
* The main disadvantage of social media marketing is its heavy reliance on ads.
* No direct interaction with the people to solve problems.
* One of the main issues is security and privacy, a problem many users may face.
CONCLUSION:-
There is no better marketing strategy for selling and promoting products than social media advertising. No other strategies can deliver consistent, scalable, quality leads and customers from day one that can supplement any promotional marketing.
Gandhi first proposed a flag to the Indian National Congress in 1921, and the Indian flag was designed by Pingali Venkayya. In the flag, the deep saffron colour stands for courage and sacrifice; the white colour for honesty, peace, and purity; the dark green colour for faith and chivalry; and the chakra in the middle for vigilance, perseverance, and justice. This flag was adopted in 1947. Bhikaji Rustom Cama was the fiery lady who unfurled the first version of the Indian national flag, a tricolour of green, saffron, and red stripes, at the International Socialist Congress held at Stuttgart.
The Indian flag is a horizontal triband of India saffron, white, and India green, charged with a navy blue wheel with 24 spokes in the centre. The flag was proposed by Nehru at the Constituent Assembly on 22 July 1947 as a horizontal tricolour of deep saffron, white, and dark green in equal proportions, with the Ashoka wheel in blue in the centre of the white band. From 26 January 2002, the flag code was amended to allow private citizens to hoist the flag on any day of the year, subject to their safeguarding the dignity, honour, and respect of the flag.
Global warming refers to the long-term rise in Earth’s average temperature: the global annual temperature has increased in total by a little more than 1 degree Celsius, or about 2 degrees Fahrenheit. Global warming is mainly the result of human activity since the Industrial Revolution, including the burning of fuels and plastics.
CAUSES OF GLOBAL WARMING:-
* Greenhouse gases are the main cause of global warming.
* Industrial emissions are another major contributor to global warming.
* Deforestation is one of the reasons for global warming.
* Smoke from vehicles also adds to the problem.
EFFECTS OF GLOBAL WARMING:-
* Global warming causes a rise in temperature that raises sea levels, which can cause floods.
* Global warming raises the temperature of the atmosphere.
* The melting of glaciers is one of the biggest threats to the earth.
* If global warming keeps raising temperatures, the availability of water is threatened.
* It also causes health problems such as allergies and chest pain.
CONTROLLING MEASURES OF GLOBAL WARMING:-
* By decreasing deforestation and encouraging afforestation.
Overview: Are you a glass half-empty or half-full sort of person? Studies have demonstrated that both can impact your physical and mental health and that being a positive thinker is the better of the two.
A recent study followed 70,000 women from 2004 to 2012 and found that those who were optimistic had a significantly lower risk of dying from several major causes of death, including:
* heart disease * stroke * cancer, including breast, ovarian, lung, and colorectal cancers * infection * respiratory diseases
Other proven benefits of thinking positively include:
* better quality of life * higher energy levels * better psychological and physical health * faster recovery from injury or illness * fewer colds * lower rates of depression * better stress management and coping skills * longer life span
Positive thinking isn’t magic and it won’t make all of your problems disappear. What it will do is make problems seem more manageable and help you approach hardships in a more positive and productive way.
How to think positive thoughts: Positive thinking can be achieved through a few different techniques that have been proven effective, such as positive self-talk and positive imagery.
Here are some tips to get you started that can help you train your brain to think positively.
Focus on the good things: Challenging situations and obstacles are a part of life. When you’re faced with one, focus on the good things, no matter how small or insignificant they seem. If you look for it, you can always find the proverbial silver lining in every cloud, even if it’s not immediately obvious. For example, if someone cancels plans, focus on how it frees up time for you to catch up on a TV show or other activity you enjoy.
Practice gratitude: Practicing gratitude has been shown to reduce stress, improve self-esteem, and foster resilience even in very difficult times. Think of people, moments, or things that bring you some kind of comfort or happiness and try to express your gratitude at least once a day. This can be thanking a co-worker for helping with a project, a loved one for washing the dishes, or your dog for the unconditional love they give you.
Keep a gratitude journal: Studies have found that writing down the things you’re grateful for can improve your optimism and sense of well-being. You can do this by writing in a gratitude journal every day, or by jotting down a list of things you’re grateful for on days when you’re having a hard time.
Open yourself up to humor: Studies have found that laughter lowers stress, anxiety, and depression. It also improves coping skills, mood, and self-esteem.
Be open to humor in all situations, especially the difficult ones, and give yourself permission to laugh. It instantly lightens the mood and makes things seem a little less difficult. Even if you’re not feeling it, pretending or forcing yourself to laugh can improve your mood and lower stress.
Spend time with positive people: Negativity and positivity have been shown to be contagious. Consider the people with whom you’re spending time. Have you noticed how someone in a bad mood can bring down almost everyone in a room? A positive person has the opposite effect on others.
Being around positive people has been shown to improve self-esteem and increase your chances of reaching goals. Surround yourself with people who will lift you up and help you see the bright side.
Practice positive self-talk: We tend to be the hardest on ourselves and be our own worst critic. Over time, this can cause you to form a negative opinion of yourself that can be hard to shake. To stop this, you’ll need to be mindful of the voice in your head and respond with positive messages, also known as positive self-talk.
Research shows that even a small shift in the way you talk to yourself can influence your ability to regulate your feelings, thoughts, and behavior under stress.
Here’s an example of positive self-talk: Instead of thinking “I really messed that up,” try “I’ll try it again a different way.”
Identify your areas of negativity: Take a good look at the different areas of your life and identify the ones in which you tend to be the most negative. Not sure? Ask a trusted friend or colleague. Chances are, they’ll be able to offer some insight. A co-worker might notice that you tend to be negative at work. Your spouse may notice that you get especially negative while driving. Tackle one area at a time.
Start every day on a positive note: Create a ritual in which you start off each day with something uplifting and positive. Here are a few ideas:
* Tell yourself that it’s going to be a great day or any other positive affirmation. * Listen to a happy and positive song or playlist. * Share some positivity by giving a compliment or doing something nice for someone.
How to think positive when everything is going wrong: Trying to be positive when you’re grieving or experiencing other serious distress can seem impossible. During these times, it’s important to take the pressure off of yourself to find the silver lining. Instead, channel that energy into getting support from others.
Positive thinking isn’t about burying every negative thought or emotion you have or avoiding difficult feelings. The lowest points in our lives are often the ones that motivate us to move on and make positive changes.
When going through such a time, try to see yourself as if you were a good friend in need of comfort and sound advice. What would you say to her? You’d likely acknowledge her feelings and remind her she has every right to feel sad or angry in her situation, and then offer support with a gentle reminder that things will get better.
Side effects of negative thinking: Negative thinking and the many feelings that can accompany it, such as pessimism, stress, and anger, can cause a number of physical symptoms and increase your risk of diseases and a shortened lifespan.
Stress and other negative emotions affect several processes in our bodies, including stress hormone release, metabolism, and immune function. Long periods of stress increase inflammation in your body, which has also been implicated in a number of serious diseases.
Some of the symptoms of stress include:
* headache * body aches * nausea * fatigue * difficulty sleeping
Cynicism, stress, anger, and hostility have also been linked to a higher risk of a number of serious health conditions, including heart disease.
When to seek medical help: If you’re feeling consumed by negative thoughts and are having trouble controlling your emotions, see a doctor. You may benefit from medical help, such as positive psychology or therapy. Persistent negative thoughts can be caused by an underlying psychiatric condition that requires treatment.
Takeaway: You won’t be able to undo years of pessimism and negative thoughts overnight, but with some practice, you can learn how to approach things with a more positive outlook.
What is Education? The first thing that strikes our minds when we think about education is gaining knowledge. Education is a tool which provides people with knowledge, skills, techniques, and information, and enables them to know their rights and duties toward their family, society, and the nation. It expands our vision and outlook on the world. It develops the capabilities to fight against injustice, violence, corruption, and many other bad elements in society.
Education gives us knowledge of the world around us. It develops in us a perspective for looking at life. It is the most important element in the evolution of a nation. Without education, one will not explore new ideas, which means one will not be able to develop the world, because without ideas there is no creativity, and without creativity there is no development of the nation.
Importance of Education in Our Society Education is an important aspect that plays a huge role in the modern, industrialized world. People need a good education to be able to survive in this competitive world. Modern society is based on people who have high living standards and knowledge which allows them to implement better solutions to their problems.
Features of Education
Education empowers everyone. Some of the areas where education helps are: 1. Removing Poverty Education helps in removing poverty, as an educated person can get a good job and fulfill all the basic needs & requirements of his family.
2. Safety and Security against Crime If a person is well-educated, he will not be fooled by anyone easily. An educated person is less prone to be involved in domestic violence & other social evils. They enjoy healthy relationships in life. This means people are less susceptible to being cheated or becoming a victim of violence.
3. Prevention of Wars and Terrorism To lead a safe & secure life, one needs to understand the value of education in our daily life. One needs to take an active part in various educational activities. These types of productive activities provide knowledge to live a better life.
4. Commerce and Trade A good education doesn’t simply mean going to school or college & getting a degree. The trade & commerce of a country will also flourish easily if its citizens are well-educated. Education helps people become self-dependent and builds great confidence in them to accomplish difficult tasks. On getting an education, their standard of life improves.
5. Law and Order Education enables the process of a nation’s fast development. If you have a good education, you can serve your country well. It develops a good political ideology.
6. Women Empowerment Education also helps in empowering women. Certain old customs like not remarrying widows, Sati Pratha, child marriage, the dowry system, etc. can be demolished with the power of education. Women, if educated, can raise their voices against the injustice done to them. This will bring a lot of development in society as well as in the nation. In short, the right to freedom of speech & expression can be used in the right way if all women become educated.
7. Upliftment of economically weaker sections of society Education is the most important ingredient to change the world. Due to lack of education, many illiterate people suffer the hardships of discrimination, untouchability & the injustices prevailing in society, but this can change with the spread of good education. If all people are educated, it ultimately leads to the upliftment of the economically weaker sections of society.
8. Communications The relation between education & communication is apparent. A good education helps us communicate better with other people. It also improves our communication skills such as speech, body language, etc. A person who is educated feels confident enough to give a speech in front of a large audience or to hold a meeting or seminar.
One of the most important benefits of education is that it improves personal lives and helps society to run smoothly. By providing education, poverty can be removed and every person can contribute to developing the country.
Pluto (minor-planet designation: 134340 Pluto) is a dwarf planet in the Kuiper belt, a ring of bodies beyond the orbit of Neptune. It was the first and the largest Kuiper belt object to be discovered. After Pluto was discovered in 1930, it was declared to be the ninth planet from the Sun. Beginning in the 1990s, its status as a planet was questioned following the discovery of several objects of similar size in the Kuiper belt and the scattered disc, including the dwarf planet Eris. This led the International Astronomical Union (IAU) in 2006 to formally define the term planet, excluding Pluto and reclassifying it as a dwarf planet.
Pluto is the ninth-largest and tenth-most-massive known object directly orbiting the Sun. It is the largest known trans-Neptunian object by volume but is less massive than Eris. Like other Kuiper belt objects, Pluto is primarily made of ice and rock and is relatively small—one-sixth the mass of the Moon and one-third its volume. It has a moderately eccentric and inclined orbit during which it ranges from 30 to 49 astronomical units or AU (4.4–7.4 billion km) from the Sun. This means that Pluto periodically comes closer to the Sun than Neptune, but a stable orbital resonance with Neptune prevents them from colliding. Light from the Sun takes 5.5 hours to reach Pluto at its average distance (39.5 AU).
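The 5.5-hour figure follows directly from the quoted average distance of 39.5 AU, using the standard values 1 AU ≈ 1.496e11 m and c ≈ 3.0e8 m/s (which are not stated in the text):

```python
# Quick check of the light travel time from the Sun to Pluto at 39.5 AU.
AU = 1.496e11          # metres per astronomical unit
c = 2.998e8            # speed of light, m/s

distance = 39.5 * AU
hours = distance / c / 3600

print(f"{hours:.1f} h")   # ~5.5 hours, matching the figure above
```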
Pluto has five known moons: Charon (the largest, with a diameter just over half that of Pluto), Styx, Nix, Kerberos, and Hydra. Pluto and Charon are sometimes considered a binary system because the barycenter of their orbits does not lie within either body.
The New Horizons spacecraft performed a flyby of Pluto on July 14, 2015, becoming the first and, to date, only spacecraft to do so. During its brief flyby, New Horizons made detailed measurements and observations of Pluto and its moons. In September 2016, astronomers announced that the reddish-brown cap of the north pole of Charon is composed of tholins, organic macromolecules that may be ingredients for the emergence of life, and produced from methane, nitrogen and other gases released from the atmosphere of Pluto and transferred 19,000 km (12,000 mi) to the orbiting moon.
Orbit :
Pluto was discovered in 1930 near the star δ Geminorum and was, merely coincidentally, crossing the ecliptic at the time of its discovery. Pluto moves about 7 degrees east per decade with small apparent retrograde motion as seen from Earth. Pluto was closer to the Sun than Neptune between 1979 and 1999.
Pluto’s orbital period is currently about 248 years. Its orbital characteristics are substantially different from those of the planets, which follow nearly circular orbits around the Sun close to a flat reference plane called the ecliptic. In contrast, Pluto’s orbit is moderately inclined relative to the ecliptic (over 17°) and moderately eccentric (elliptical). This eccentricity means a small region of Pluto’s orbit lies closer to the Sun than Neptune’s. The Pluto–Charon barycenter came to perihelion on September 5, 1989, and was last closer to the Sun than Neptune between February 7, 1979, and February 11, 1999.
Although the 3:2 resonance with Neptune (see below) is maintained, Pluto’s inclination and eccentricity behave in a chaotic manner. Computer simulations can be used to predict its position for several million years (both forward and backward in time), but after intervals much longer than the Lyapunov time of 10–20 million years, calculations become unreliable: Pluto is sensitive to immeasurably small details of the Solar System, hard-to-predict factors that will gradually change Pluto’s position in its orbit.
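The 3:2 resonance can be sanity-checked against the orbital periods: Pluto's roughly 248 years (quoted above) versus Neptune's roughly 164.8 years, a standard value that is not given in the text. Neptune completes very nearly three orbits for every two of Pluto's:

```python
# Ratio of the two orbital periods; 3/2 = 1.5 exactly.
pluto_period = 248.0      # years, from the text above
neptune_period = 164.8    # years, standard value assumed here

print(f"{pluto_period / neptune_period:.3f}")   # ~1.505, very close to 3:2
```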
Rotation:
Pluto’s rotation period, its day, is equal to 6.387 Earth days. Like Uranus, Pluto rotates on its “side” in its orbital plane, with an axial tilt of 120°, and so its seasonal variation is extreme; at its solstices, one-fourth of its surface is in continuous daylight, whereas another fourth is in continuous darkness. The reason for this unusual orientation has been debated. Research from the University of Arizona has suggested that it may be due to the way that a body’s spin will always adjust to minimise energy. This could mean a body reorienting itself to put extraneous mass near the equator, while regions lacking mass tend towards the poles; this is called polar wander. According to a paper released by the University of Arizona, this could be caused by masses of frozen nitrogen building up in shadowed areas of the dwarf planet. These masses would cause the body to reorient itself, leading to its unusual axial tilt of 120°. The buildup of nitrogen is due to Pluto’s vast distance from the Sun. At the equator, temperatures can drop to −240 °C (−400.0 °F; 33.1 K), causing nitrogen to freeze as water would freeze on Earth. The same effect seen on Pluto would be observed on Earth were the Antarctic ice sheet several times larger.
Atmosphere:
Pluto has a tenuous atmosphere consisting of nitrogen (N2), methane (CH4), and carbon monoxide (CO), which are in equilibrium with their ices on Pluto's surface. According to measurements by New Horizons, the surface pressure is about 1 Pa (10 μbar), roughly 100,000 to one million times less than Earth's atmospheric pressure.
It was initially thought that, as Pluto moves away from the Sun, its atmosphere should gradually freeze onto the surface; however, studies of New Horizons data and ground-based occultations show that Pluto's atmospheric density increases, and that it likely remains gaseous throughout Pluto's orbit. New Horizons observations showed atmospheric escape of nitrogen to be 10,000 times less than expected. Alan Stern has contended that even a small increase in Pluto's surface temperature can lead to exponential increases in Pluto's atmospheric density, from 18 hPa to as much as 280 hPa (three times that of Mars to a quarter that of Earth). At such densities, nitrogen could flow across the surface as liquid. Just as sweat cools the body as it evaporates from the skin, the sublimation of Pluto's atmosphere cools its surface. The presence of atmospheric gases was traced up to 1,670 kilometers high; the atmosphere does not have a sharp upper boundary.
Satellites:
Pluto has five known natural satellites. The closest to Pluto is Charon. First identified in 1978 by astronomer James Christy, Charon is the only moon of Pluto that may be in hydrostatic equilibrium. Charon's mass is sufficient to cause the barycenter of the Pluto–Charon system to be outside Pluto. Beyond Charon there are four much smaller circumbinary moons. In order of distance from Pluto they are Styx, Nix, Kerberos, and Hydra. Nix and Hydra were both discovered in 2005, Kerberos was discovered in 2011, and Styx was discovered in 2012. The satellites' orbits are circular (eccentricity < 0.006) and coplanar with Pluto's equator (inclination < 1°), and therefore tilted approximately 120° relative to Pluto's orbit. The Plutonian system is highly compact: the five known satellites orbit within the inner 3% of the region where prograde orbits would be stable.
Origin:
Pluto’s origin and identity had long puzzled astronomers. One early hypothesis was that Pluto was an escaped moon of Neptune knocked out of orbit by Neptune’s largest current moon, Triton. This idea was eventually rejected after dynamical studies showed it to be impossible because Pluto never approaches Neptune in its orbit.
Pluto's true place in the Solar System began to reveal itself only in 1992, when astronomers began to find small icy objects beyond Neptune that were similar to Pluto not only in orbit but also in size and composition. This trans-Neptunian population is thought to be the source of many short-period comets. Pluto is now known to be the largest member of the Kuiper belt, a stable belt of objects located between 30 and 50 AU from the Sun. As of 2011, surveys of the Kuiper belt to magnitude 21 were nearly complete and any remaining Pluto-sized objects are expected to be beyond 100 AU from the Sun. Like other Kuiper-belt objects (KBOs), Pluto shares features with comets; for example, the solar wind is gradually blowing Pluto's surface into space. It has been claimed that if Pluto were placed as near to the Sun as Earth, it would develop a tail, as comets do. This claim has been disputed with the argument that Pluto's escape velocity is too high for this to happen. It has been proposed that Pluto may have formed as a result of the agglomeration of numerous comets and Kuiper-belt objects.
A sensor is a device that produces an output signal for the purpose of sensing a physical phenomenon.
In the broadest definition, a sensor is a device, module, machine, or subsystem that detects events or changes in its environment and sends the information to other electronics, frequently a computer processor. Sensors are always used with other electronics.
Sensors are used in everyday objects such as touch-sensitive elevator buttons (tactile sensor) and lamps which dim or brighten by touching the base, and in innumerable applications of which most people are never aware. With advances in micromachinery and easy-to-use microcontroller platforms, the uses of sensors have expanded beyond the traditional fields of temperature, pressure and flow measurement, for example into MARG sensors.
Analog sensors such as potentiometers and force-sensing resistors are still widely used. Their applications include manufacturing and machinery, airplanes and aerospace, cars, medicine, robotics and many other aspects of our day-to-day life. There is a wide range of other sensors that measure chemical and physical properties of materials, including optical sensors for refractive index measurement, vibrational sensors for fluid viscosity measurement, and electro-chemical sensors for monitoring pH of fluids.
A sensor’s sensitivity indicates how much its output changes when the input quantity it measures changes. For instance, if the mercury in a thermometer moves 1 cm when the temperature changes by 1 °C, its sensitivity is 1 cm/°C (it is basically the slope dy/dx assuming a linear characteristic). Some sensors can also affect what they measure; for instance, a room temperature thermometer inserted into a hot cup of liquid cools the liquid while the liquid heats the thermometer. Sensors are usually designed to have a small effect on what is measured; making the sensor smaller often improves this and may introduce other advantages.
Technological progress allows more and more sensors to be manufactured on a microscopic scale as microsensors using MEMS technology. In most cases, a microsensor reaches a significantly faster measurement time and higher sensitivity compared with macroscopic approaches. Due to the increasing demand for rapid, affordable and reliable information in today's world, disposable sensors (low-cost and easy-to-use devices for short-term monitoring or single-shot measurements) have recently gained growing importance. Using this class of sensors, critical analytical information can be obtained by anyone, anywhere and at any time, without the need for recalibration or concern about contamination.
Classification of measurement errors:
A good sensor obeys the following rules:
* it is sensitive to the measured property, * it is insensitive to any other property likely to be encountered in its application, and * it does not influence the measured property.
Most sensors have a linear transfer function. The sensitivity is then defined as the ratio between the output signal and measured property. For example, if a sensor measures temperature and has a voltage output, the sensitivity is a constant with the units [V/K]. The sensitivity is the slope of the transfer function. Converting the sensor's electrical output (for example V) to the measured units (for example K) requires dividing the electrical output by the slope (or multiplying by its reciprocal). In addition, an offset is frequently added or subtracted. For example, −40 must be added to the output if 0 V output corresponds to a −40 °C input.
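To make the conversion just described concrete, the sketch below assumes a hypothetical linear temperature sensor; the sensitivity value and function name are invented for illustration and are not taken from any particular device.

```python
# Hypothetical linear temperature sensor: 0 V corresponds to -40 °C,
# and the sensitivity (slope of the transfer function) is 0.01 V/°C.
SENSITIVITY_V_PER_C = 0.01   # slope of the transfer function
OFFSET_C = -40.0             # temperature corresponding to 0 V output

def voltage_to_temperature(v_out: float) -> float:
    """Convert the sensor's electrical output (V) to the measured unit (°C)."""
    # Divide by the slope (or multiply by its reciprocal), then apply the offset.
    return v_out / SENSITIVITY_V_PER_C + OFFSET_C

print(voltage_to_temperature(0.0))   # -40.0 °C
print(voltage_to_temperature(0.65))  # 25.0 °C
```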
For an analog sensor signal to be processed, or used in digital equipment, it needs to be converted to a digital signal, using an analog-to-digital converter.
Sensor deviations:
Since sensors cannot replicate an ideal transfer function, several types of deviations can occur which limit sensor accuracy:
* Since the range of the output signal is always limited, the output signal will eventually reach a minimum or maximum when the measured property exceeds the limits. The full scale range defines the maximum and minimum values of the measured property. * The sensitivity may in practice differ from the value specified. This is called a sensitivity error. This is an error in the slope of a linear transfer function. * If the output signal differs from the correct value by a constant, the sensor has an offset error or bias. This is an error in the y-intercept of a linear transfer function. * Nonlinearity is deviation of a sensor's transfer function from a straight-line transfer function. Usually, this is defined by the amount the output differs from ideal behavior over the full range of the sensor, often noted as a percentage of the full range. * Deviation caused by rapid changes of the measured property over time is a dynamic error. Often, this behavior is described with a Bode plot showing sensitivity error and phase shift as a function of the frequency of a periodic input signal. * If the output signal slowly changes independent of the measured property, this is defined as drift. Long-term drift over months or years is caused by physical changes in the sensor. * Noise is a random deviation of the signal that varies in time. * A hysteresis error causes the output value to vary depending on the previous input values. If a sensor's output is different depending on whether a specific input value was reached by increasing vs. decreasing the input, then the sensor has a hysteresis error.
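The following toy model, with invented numbers, illustrates how sensitivity error, offset error, and noise distort the readings of an otherwise ideal linear sensor; it is only a sketch of the deviations listed above, not a model of any real device.

```python
# Toy model of how offset and sensitivity errors distort an ideal linear sensor.
# The numbers are invented for illustration only.
import random

IDEAL_SLOPE = 2.0        # ideal output units per input unit
SLOPE_ERROR = 0.05       # 5% sensitivity error (error in the slope)
OFFSET_ERROR = 0.3       # constant bias added to every reading
NOISE_STD = 0.02         # random deviation that varies in time

def ideal_sensor(x: float) -> float:
    return IDEAL_SLOPE * x

def real_sensor(x: float) -> float:
    slope = IDEAL_SLOPE * (1 + SLOPE_ERROR)                       # sensitivity error
    return slope * x + OFFSET_ERROR + random.gauss(0, NOISE_STD)  # offset + noise

for x in (0.0, 1.0, 2.0):
    print(x, ideal_sensor(x), round(real_sensor(x), 3))
```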
Resolution:
The sensor resolution or measurement resolution is the smallest change that can be detected in the quantity being measured. The resolution of a sensor with a digital output is usually the numerical resolution of the digital output. The resolution is related to the precision with which the measurement is made, but they are not the same thing. A sensor's accuracy may be considerably worse than its resolution.
For example, the distance resolution is the minimum distance that can be accurately measured by a distance-measuring device. In a time-of-flight camera, the distance resolution is usually equal to the standard deviation (total noise) of the signal expressed in units of length.
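For a sensor read through an ideal analog-to-digital converter, the resolution is one least-significant bit of the digital output. The sketch below assumes a hypothetical 12-bit converter spanning a 0–100 °C range; the numbers are invented for illustration.

```python
# Resolution of a sensor with a digital (ADC) output, assuming an ideal
# converter: the smallest detectable change is one least-significant bit.
def adc_resolution(full_scale_range: float, bits: int) -> float:
    """Smallest change in the measured quantity that changes the digital output."""
    return full_scale_range / (2 ** bits)

# Hypothetical temperature sensor spanning 0-100 °C read by a 12-bit ADC.
print(adc_resolution(100.0, 12))  # ~0.0244 °C per count
```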
The sensor may to some extent be sensitive to properties other than the property being measured. For example, most sensors are influenced by the temperature of their environment.
Chemical sensor:
A chemical sensor is a self-contained analytical device that can provide information about the chemical composition of its environment, that is, a liquid or a gas phase. The information is provided in the form of a measurable physical signal that is correlated with the concentration of a certain chemical species (termed the analyte). Two main steps are involved in the functioning of a chemical sensor, namely, recognition and transduction. In the recognition step, analyte molecules interact selectively with receptor molecules or sites included in the structure of the recognition element of the sensor. Consequently, a characteristic physical parameter varies and this variation is reported by means of an integrated transducer that generates the output signal. A chemical sensor based on recognition material of biological nature is a biosensor. However, as synthetic biomimetic materials increasingly substitute for biological recognition materials, a sharp distinction between a biosensor and a standard chemical sensor becomes superfluous. Typical biomimetic materials used in sensor development are molecularly imprinted polymers and aptamers.
Biosensor:
In biomedicine and biotechnology, sensors which detect analytes thanks to a biological component, such as cells, protein, nucleic acid or biomimetic polymers, are called biosensors. A non-biological sensor, even an organic (carbon-chemistry) one, for biological analytes is referred to as a sensor or nanosensor. This terminology applies for both in vitro and in vivo applications. The encapsulation of the biological component in biosensors presents a slightly different problem than ordinary sensors; this can either be done by means of a semipermeable barrier, such as a dialysis membrane or a hydrogel, or a 3D polymer matrix, which either physically constrains the sensing macromolecule or chemically constrains the macromolecule by binding it to the scaffold.
Neuromorphic sensors:
Neuromorphic sensors are sensors that physically mimic structures and functions of biological neural entities.[8] One example of this is the event camera.
MOS sensors:
Metal-oxide-semiconductor (MOS) technology originates from the MOSFET (MOS field-effect transistor, or MOS transistor) invented by Mohamed M. Atalla and Dawon Kahng in 1959, and demonstrated in 1960. MOSFET sensors (MOS sensors) were later developed, and they have since been widely used to measure physical, chemical, biological and environmental parameters.
Biochemical sensors:
A number of MOSFET sensors have been developed, for measuring physical, chemical, biological and environmental parameters. The earliest MOSFET sensors include the open-gate field-effect transistor (OGFET) introduced by Johannessen in 1970, the ion-sensitive field-effect transistor (ISFET) invented by Piet Bergveld in 1970, the adsorption FET (ADFET) patented by P.F. Cox in 1974, and a hydrogen-sensitive MOSFET demonstrated by I. Lundstrom, M.S. Shivaraman, C.S. Svenson and L. Lundkvist in 1975. The ISFET is a special type of MOSFET with a gate at a certain distance, and where the metal gate is replaced by an ion-sensitive membrane, electrolyte solution and reference electrode. The ISFET is widely used in biomedical applications, such as the detection of DNA hybridization, biomarker detection from blood, antibody detection, glucose measurement, pH sensing, and genetic technology.
Image sensors:
MOS technology is the basis for modern image sensors, including the charge-coupled device (CCD) and the CMOS active-pixel sensor (CMOS sensor), used in digital imaging and digital cameras. Willard Boyle and George E. Smith developed the CCD in 1969. While researching the MOS process, they realized that an electric charge was the analogy of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next. The CCD is a semiconductor circuit that was later used in the first digital video cameras for television broadcasting.
The MOS active-pixel sensor (APS) was developed by Tsutomu Nakamura at Olympus in 1985. The CMOS active-pixel sensor was later developed by Eric Fossum and his team in the early 1990s.
Monitoring sensors:
MOS monitoring sensors are used for house monitoring, office and agriculture monitoring, traffic monitoring (including car speed, traffic jams, and traffic accidents), weather monitoring (such as for rain, wind, lightning and storms), defense monitoring, and monitoring temperature, humidity, air pollution, fire, health, security and lighting. MOS gas detector sensors are used to detect carbon monoxide, sulfur dioxide, hydrogen sulfide, ammonia, and other gas substances. Other MOS sensors include intelligent sensors and wireless sensor network (WSN) technology.
Data science is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from noisy, structured and unstructured data, and apply knowledge and actionable insights from data across a broad range of application domains. Data science is related to data mining, machine learning and big data.
Data science is a “concept to unify statistics, data analysis, informatics, and their related methods” in order to “understand and analyze actual phenomena” with data. It uses techniques and theories drawn from many fields within the context of mathematics, statistics, computer science, information science, and domain knowledge. However, data science is different from computer science and information science. Turing Award winner Jim Gray imagined data science as a “fourth paradigm” of science (empirical, theoretical, computational, and now data-driven) and asserted that “everything about science is changing because of the impact of information technology” and the data deluge.
A data scientist is someone who creates programming code and combines it with statistical knowledge to create insights from data.
Foundations:
Data science is an interdisciplinary field focused on extracting knowledge from data sets, which are typically large (see big data), and applying the knowledge and actionable insights from data to solve problems in a wide range of application domains. The field encompasses preparing data for analysis, formulating data science problems, analyzing data, developing data-driven solutions, and presenting findings to inform high-level decisions in a broad range of application domains. As such, it incorporates skills from computer science, statistics, information science, mathematics, information visualization, data integration, graphic design, complex systems, communication and business. Statistician Nathan Yau, drawing on Ben Fry, also links data science to human-computer interaction: users should be able to intuitively control and explore data. In 2015, the American Statistical Association identified database management, statistics and machine learning, and distributed and parallel systems as the three emerging foundational professional communities.
Relationship to statistics:
Many statisticians, including Nate Silver, have argued that data science is not a new field, but rather another name for statistics. Others argue that data science is distinct from statistics because it focuses on problems and techniques unique to digital data. Vasant Dhar writes that statistics emphasizes quantitative data and description. In contrast, data science deals with quantitative and qualitative data (e.g. images) and emphasizes prediction and action. Andrew Gelman of Columbia University has described statistics as a nonessential part of data science. Stanford professor David Donoho writes that data science is not distinguished from statistics by the size of datasets or use of computing, and that many graduate programs misleadingly advertise their analytics and statistics training as the essence of a data science program. He describes data science as an applied field growing out of traditional statistics. In summary, data science can therefore be described as an applied branch of statistics.
Etymology:
In 1962, John Tukey described a field he called "data analysis", which resembles modern data science. In 1985, in a lecture given to the Chinese Academy of Sciences in Beijing, C.F. Jeff Wu used the term "data science" for the first time as an alternative name for statistics. Later, attendees at a 1992 statistics symposium at the University of Montpellier II acknowledged the emergence of a new discipline focused on data of various origins and forms, combining established concepts and principles of statistics and data analysis with computing.
The term "data science" has been traced back to 1974, when Peter Naur proposed it as an alternative name for computer science. In 1996, the International Federation of Classification Societies became the first conference to specifically feature data science as a topic. However, the definition was still in flux. After the 1985 lecture at the Chinese Academy of Sciences in Beijing, C.F. Jeff Wu again suggested in 1997 that statistics should be renamed data science. He reasoned that a new name would help statistics shed inaccurate stereotypes, such as being synonymous with accounting, or limited to describing data. In 1998, Hayashi Chikio argued for data science as a new, interdisciplinary concept, with three aspects: data design, collection, and analysis.
During the 1990s, popular terms for the process of finding patterns in datasets (which were increasingly large) included “knowledge discovery” and “data mining”.
Modern usage:
The modern conception of data science as an independent discipline is sometimes attributed to William S. Cleveland. In a 2001 paper, he advocated an expansion of statistics beyond theory into technical areas; because this would significantly change the field, it warranted a new name. "Data science" became more widely used in the next few years: in 2002, the Committee on Data for Science and Technology launched Data Science Journal. In 2003, Columbia University launched The Journal of Data Science. In 2014, the American Statistical Association's Section on Statistical Learning and Data Mining changed its name to the Section on Statistical Learning and Data Science, reflecting the ascendant popularity of data science.
The professional title of “data scientist” has been attributed to DJ Patil and Jeff Hammerbacher in 2008. Though it was used by the National Science Board in their 2005 report, “Long-Lived Digital Data Collections: Enabling Research and Education in the 21st Century,” it referred broadly to any key role in managing a digital data collection.
Market:
Big data is becoming a tool for businesses and companies of all sizes. The availability and interpretation of big data has altered the business models of old industries and enabled the creation of new ones. Data scientists are responsible for breaking down big data into usable information and creating software and algorithms that help companies and organizations determine optimal operations.
An aircraft is a vehicle or machine that is able to fly by gaining support from the air. It counters the force of gravity by using either static lift or the dynamic lift of an airfoil, or in a few cases the downward thrust from jet engines. Common examples of aircraft include airplanes, helicopters, airships (including blimps), gliders, paramotors, and hot air balloons.
The human activity that surrounds aircraft is called aviation. The science of aviation, including designing and building aircraft, is called aeronautics. Crewed aircraft are flown by an onboard pilot, but unmanned aerial vehicles may be remotely controlled or self-controlled by onboard computers. Aircraft may be classified by different criteria, such as lift type, aircraft propulsion, usage and others.
History:
Flying model craft and stories of manned flight go back many centuries; however, the first manned ascent and safe descent in modern times took place in large hot-air balloons developed in the 18th century. Each of the two World Wars led to great technical advances. Consequently, the history of aircraft can be divided into five eras:
* Pioneers of flight, from the earliest experiments to 1914. * First World War, 1914 to 1918. * Aviation between the World Wars, 1918 to 1939. * Second World War, 1939 to 1945. * Postwar era, also called the Jet Age, 1945 to the present day.
Methods of lift:
Lighter-than-air – aerostats:
Aerostats use buoyancy to float in the air in much the same way that ships float on the water. They are characterized by one or more large cells or canopies, filled with a relatively low-density gas such as helium, hydrogen, or hot air, which is less dense than the surrounding air. When the weight of this gas is added to the weight of the aircraft structure, it adds up to the same weight as the air that the craft displaces.
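A rough numerical sketch of that balance is given below; the sea-level gas densities are approximate textbook values assumed for the example, not figures from the text.

```python
# Rough estimate of aerostat lift: the craft floats when its total weight
# equals the weight of the air it displaces (Archimedes' principle).
# Densities are approximate sea-level values near 0 °C, assumed for this sketch.
RHO_AIR = 1.29      # kg/m^3
RHO_HELIUM = 0.18   # kg/m^3

def net_lift_kg(envelope_volume_m3: float, gas_density: float = RHO_HELIUM) -> float:
    """Mass (kg) of structure plus payload that the displaced air can support."""
    return envelope_volume_m3 * (RHO_AIR - gas_density)

print(net_lift_kg(1000.0))  # ~1110 kg of lift from 1000 m^3 of helium
```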
Small hot-air balloons, called sky lanterns, were first invented in ancient China prior to the 3rd century BC and used primarily in cultural celebrations. They were only the second type of aircraft to fly, the first being kites, which were invented in ancient China over two thousand years ago.
During World War II, the streamlined, non-rigid envelope shape was widely adopted for tethered balloons; in windy weather, this both reduces the strain on the tether and stabilizes the balloon. The nickname blimp was adopted along with the shape. In modern times, any small dirigible or airship is called a blimp, though a blimp may be unpowered as well as powered.
Heavier-than-air – aerodynes:
Heavier-than-air aircraft, such as airplanes, must find some way to push air or gas downwards so that a reaction occurs (by Newton's laws of motion) to push the aircraft upwards. This dynamic movement through the air is the origin of the term aerodyne. There are two ways to produce dynamic upthrust: aerodynamic lift, and powered lift in the form of engine thrust.
Aerodynamic lift involving wings is the most common, with fixed-wing aircraft being kept in the air by the forward movement of wings, and rotorcraft by spinning wing-shaped rotors sometimes called rotary wings. A wing is a flat, horizontal surface, usually shaped in cross-section as an aerofoil. To fly, air must flow over the wing and generate lift. A flexible wing is a wing made of fabric or thin sheet material, often stretched over a rigid frame. A kite is tethered to the ground and relies on the speed of the wind over its wings, which may be flexible or rigid, fixed, or rotary.
Fixed-wing:
The forerunner of the fixed-wing aircraft is the kite. Whereas a fixed-wing aircraft relies on its forward speed to create airflow over the wings, a kite is tethered to the ground and relies on the wind blowing over its wings to provide lift. Kites were the first kind of aircraft to fly and were invented in China around 500 BC. Much aerodynamic research was done with kites before test aircraft, wind tunnels, and computer modelling programs became available.
The first heavier-than-air craft capable of controlled free-flight were gliders. A glider designed by George Cayley carried out the first true manned, controlled flight in 1853.
The practical, powered, fixed-wing aircraft (the airplane or aeroplane) was invented by Wilbur and Orville Wright.
Rotorcraft:
Rotorcraft, or rotary-wing aircraft, use a spinning rotor with aerofoil section blades (a rotary wing) to provide lift. Types include helicopters, autogyros, and various hybrids such as gyrodynes and compound rotorcraft.
Helicopters have a rotor turned by an engine-driven shaft. The rotor pushes air downward to create lift. By tilting the rotor forward, the downward flow is tilted backward, producing thrust for forward flight. Some helicopters have more than one rotor and a few have rotors turned by gas jets at the tips.
Autogyros have unpowered rotors, with a separate power plant to provide thrust. The rotor is tilted backward. As the autogyro moves forward, air blows upward across the rotor, making it spin. This spinning increases the speed of airflow over the rotor, to provide lift. Rotor kites are unpowered autogyros, which are towed to give them forward speed or tethered to a static anchor in high wind for kited flight.
Other methods of lift:
A lifting body is an aircraft body shaped to produce lift. If there are any wings, they are too small to provide significant lift and are used only for stability and control. Lifting bodies are not efficient: they suffer from high drag, and must also travel at high speed to generate enough lift to fly. Many of the research prototypes, such as the Martin Marietta X-24, which led up to the Space Shuttle, were lifting bodies, though the Space Shuttle is not, and some supersonic missiles obtain lift from the airflow over a tubular body. Powered lift types rely on engine-derived lift for vertical takeoff and landing (VTOL). Most types transition to fixed-wing lift for horizontal flight. Classes of powered lift types include VTOL jet aircraft (such as the Harrier Jump Jet) and tiltrotors, such as the Bell Boeing V-22 Osprey, among others.
Size and speed extremes:
Size: The smallest aircraft are toys/recreational items, and nano aircraft.
The largest aircraft by dimensions and volume (as of 2016) is the 302 ft (92 m) long British Airlander 10, a hybrid blimp, with helicopter and fixed-wing features, and reportedly capable of speeds up to 90 mph (140 km/h; 78 kn), and an airborne endurance of two weeks with a payload of up to 22,050 lb (10,000 kg).
The largest aircraft by weight, and the largest regular fixed-wing aircraft ever built as of 2016, is the Antonov An-225 Mriya. That Ukrainian-built, six-engine Soviet transport of the 1980s is 84 m (276 ft) long, with an 88 m (289 ft) wingspan. It holds the world payload record, after transporting 428,834 lb (194,516 kg) of goods, and has recently flown 100 t (220,000 lb) loads commercially. With a maximum loaded weight of 550–700 t (1,210,000–1,540,000 lb), it is also the heaviest aircraft built to date. It can cruise at 500 mph (800 km/h; 430 kn).
The largest military airplanes are the Ukrainian Antonov An-124 Ruslan (the world's second-largest airplane, also used as a civilian transport) and the American Lockheed C-5 Galaxy transport, weighing, loaded, over 380 t (840,000 lb). The 8-engine, piston/propeller Hughes H-4 Hercules "Spruce Goose", an American World War II wooden flying boat transport with a greater wingspan (94 m/260 ft) than any current aircraft and a tail height equal to the tallest (Airbus A380-800 at 24.1 m/78 ft), flew only one short hop in the late 1940s and never flew out of ground effect.
The largest civilian airplanes, apart from the above-noted An-225 and An-124, are the Airbus Beluga cargo transport derivative of the Airbus A300 jet airliner, the Boeing Dreamlifter cargo transport derivative of the Boeing 747 jet airliner/transport (the 747-200B was, at its creation in the 1960s, the heaviest aircraft ever built, with a maximum weight of over 400 t (880,000 lb)), and the double-decker Airbus A380 "super-jumbo" jet airliner.
Speeds: The fastest recorded powered aircraft flight and fastest recorded aircraft flight of an air-breathing powered aircraft was of the NASA X-43A Pegasus, a scramjet-powered, hypersonic, lifting body experimental research aircraft, at Mach 9.6, exactly 3,292.8 m/s (11,854 km/h; 6,400.7 kn; 7,366 mph). The X-43A set that new mark, and broke its own world record of Mach 6.3, exactly 2,160.9 m/s (7,779 km/h; 4,200.5 kn; 4,834 mph), set in March 2004, on its third and final flight on 16 November 2004.
Prior to the X-43A, the fastest recorded powered airplane flight (and still the record for the fastest manned, powered airplane / fastest manned, non-spacecraft aircraft) was of the North American X-15A-2, rocket-powered airplane at Mach 6.72, or 2,304.96 m/s (8,297.9 km/h; 4,480.48 kn; 5,156.0 mph), on 3 October 1967. On one flight it reached an altitude of 354,300 ft (108,000 m).
The fastest known, production aircraft (other than rockets and missiles) currently or formerly operational (as of 2016) are:
The fastest fixed-wing aircraft, and fastest glider, is the Space Shuttle, a rocket-glider hybrid, which has re-entered the atmosphere as a fixed-wing glider at more than Mach 25, equal to 8,575 m/s (30,870 km/h; 16,668 kn; 19,180 mph). The fastest military airplane ever built: Lockheed SR-71 Blackbird, a U.S. reconnaissance jet fixed-wing aircraft, known to fly beyond Mach 3.3, equal to 1,131.9 m/s (4,075 km/h; 2,200.2 kn; 2,532 mph). On 28 July 1976, an SR-71 set the record for the fastest and highest-flying operational aircraft with an absolute speed record of 2,193 mph (3,529 km/h; 1,906 kn; 980 m/s) and an absolute altitude record of 85,068 ft (25,929 m).
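The metre-per-second figures quoted above are consistent with a reference speed of sound of 343 m/s (dry air at about 20 °C); the actual speed of sound varies with altitude and temperature, so the conversion below is only a reference-level check.

```python
# Reference conversion from Mach number to m/s using the sea-level speed of
# sound in dry air at about 20 °C. The real speed of sound at the altitudes
# these aircraft flew differs, so this is only a consistency check.
SPEED_OF_SOUND_M_S = 343.0

def mach_to_ms(mach: float) -> float:
    return mach * SPEED_OF_SOUND_M_S

for name, mach in [("X-43A", 9.6), ("X-15A-2", 6.72), ("SR-71", 3.3)]:
    print(f"{name}: Mach {mach} = {mach_to_ms(mach):,.1f} m/s")
```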
Uses for aircraft:
Aircraft are produced in several different types optimized for various uses: military aircraft, which include not just combat types but many types of supporting aircraft, and civil aircraft, which include all non-military types, experimental and model.
Military:
A military aircraft is any aircraft that is operated by a legal or insurrectionary armed service of any type. Military aircraft can be either combat or non-combat:
Combat aircraft are aircraft designed to destroy enemy equipment using their own armament. Combat aircraft divide broadly into fighters and bombers, with several in-between types, such as fighter-bombers and attack aircraft, including attack helicopters. Non-combat aircraft are not designed for combat as their primary function, but may carry weapons for self-defense. Non-combat roles include search and rescue, reconnaissance, observation, transport, training, and aerial refueling. These aircraft are often variants of civil aircraft.
Civil:
Civil aircraft divide into commercial and general types, although there is some overlap.
Commercial aircraft include types designed for scheduled and charter airline flights, carrying passengers, mail and other cargo. The larger passenger-carrying types are the airliners, the largest of which are wide-body aircraft. Some of the smaller types are also used in general aviation, and some of the larger types are used as VIP aircraft.
General aviation is a catch-all covering other kinds of private use (where the pilot is not paid for time or expenses) and commercial use, and involves a wide range of aircraft types such as business jets (bizjets), trainers, homebuilts, gliders, warbirds and hot air balloons, to name a few. The vast majority of aircraft today are general aviation types.
Model:
A model aircraft is a small unmanned type made to fly for fun, for static display, for aerodynamic research or for other purposes. A scale model is a replica of some larger design.
The field of electronics is a branch of physics and electrical engineering that deals with the emission, behaviour and effects of electrons using electronic devices. Electronics uses active devices to control electron flow by amplification and rectification, which distinguishes it from classical electrical engineering, which only uses passive effects such as resistance, capacitance and inductance to control electric current flow.
Electronics has hugely influenced the development of modern society. The identification of the electron in 1897, along with the subsequent invention of the vacuum tube which could amplify and rectify small electrical signals, inaugurated the field of electronics and the electron age. Practical applications started with the invention of the diode by Ambrose Fleming and the triode by Lee De Forest in the early 1900s, which made the detection of small electrical voltages such as radio signals from a radio antenna possible with a non-mechanical device. The growth of electronics was rapid, and by the early 1920s commercial radio broadcasting and communications were becoming widespread, and electronic amplifiers were being used in such diverse applications as long-distance telephony and the music recording industry.
The next big technological step took several decades to appear, when solid-state electronics emerged with the first working semiconductor transistor, invented by William Shockley, Walter Houser Brattain and John Bardeen in 1947. The vacuum tube was no longer the only means of controlling electron flow. The MOSFET (MOS transistor) was subsequently invented in 1959, and was the first compact transistor that could be miniaturised and mass-produced. This played a key role in the emergence of microelectronics and the Digital Revolution. Today, electronic devices are universally used in computers, telecommunications and signal processing, employing integrated circuits with sometimes millions of transistors on a single chip.
Electronic devices and components:
An electronic component is any physical entity in an electronic system used to affect the electrons or their associated fields in a manner consistent with the intended function of the electronic system. Components are generally intended to be connected together, usually by being soldered to a printed circuit board (PCB), to create an electronic circuit with a particular function (for example an amplifier, radio receiver, or oscillator). Components may be packaged singly, or in more complex groups as integrated circuits. Some common electronic components are capacitors, inductors, resistors, diodes, transistors, etc. Components are often categorized as active (e.g. transistors and thyristors) or passive (e.g. resistors, diodes, inductors and capacitors).
History of electronic components:
Vacuum tubes (Thermionic valves) were among the earliest electronic components. They were almost solely responsible for the electronics revolution of the first half of the twentieth century. They allowed for vastly more complicated systems and gave us radio, television, phonographs, radar, long-distance telephony and much more. They played a leading role in the field of microwave and high power transmission as well as television receivers until the middle of the 1980s. Since that time, solid-state devices have all but completely taken over. Vacuum tubes are still used in some specialist applications such as high power RF amplifiers, cathode ray tubes, specialist audio equipment, guitar amplifiers and some microwave devices.
The first working point-contact transistor was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947. In April 1955, the IBM 608 was the first IBM product to use transistor circuits without any vacuum tubes and is believed to be the first all-transistorized calculator to be manufactured for the commercial market. The 608 contained more than 3,000 germanium transistors. Thomas J. Watson Jr. ordered all future IBM products to use transistors in their design. From that time on, transistors were almost exclusively used for computer logic and peripherals. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialised applications.
Types of circuits:
Circuits and components can be divided into two groups: analog and digital. A particular device may consist of circuitry that has one or the other or a mix of the two types. An important electronic technique in both analog and digital electronics involves the use of feedback. Among many other things, this allows very linear amplifiers to be made with high gain, and digital circuits such as registers, computers and oscillators.
Analog circuits:
Most analog electronic appliances, such as radio receivers, are constructed from combinations of a few types of basic circuits. Analog circuits use a continuous range of voltage or current as opposed to discrete levels as in digital circuits.
The number of different analog circuits so far devised is huge, especially because a ‘circuit’ can be defined as anything from a single component, to systems containing thousands of components.
Analog circuits are sometimes called linear circuits although many non-linear effects are used in analog circuits such as mixers, modulators, etc. Good examples of analog circuits include vacuum tube and transistor amplifiers, operational amplifiers and oscillators.
Digital circuits:
Digital circuits are electric circuits based on a number of discrete voltage levels. Digital circuits are the most common physical representation of Boolean algebra, and are the basis of all digital computers. To most engineers, the terms "digital circuit", "digital system" and "logic" are interchangeable in the context of digital circuits. Most digital circuits use a binary system with two voltage levels labeled "0" and "1". Often logic "0" will be a lower voltage and referred to as "Low" while logic "1" is referred to as "High". However, some systems use the reverse definition ("0" is "High") or are current based. Quite often the logic designer may reverse these definitions from one circuit to the next as they see fit to facilitate the design. The definition of the levels as "0" or "1" is arbitrary.
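As a minimal illustration of digital circuits as physical Boolean algebra, the sketch below represents the two logic levels as 0 and 1 and evaluates a NAND gate over all input combinations; it is a software model of the logic, not of any particular circuit family.

```python
# Digital circuits implement Boolean algebra: here the two voltage levels are
# represented by 0 ("Low") and 1 ("High"), and a NAND gate is evaluated.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

# Truth table of the NAND gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand(a, b))
```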
Electronics theory:
Mathematical methods are integral to the study of electronics. To become proficient in electronics it is also necessary to become proficient in the mathematics of circuit analysis.
Circuit analysis is the study of methods of solving generally linear systems for unknown variables such as the voltage at a certain node or the current through a certain branch of a network. A common analytical tool for this is the SPICE circuit simulator.
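As a minimal sketch of circuit analysis as solving a linear system, the example below applies nodal analysis to a hypothetical three-resistor network; the component values are invented, and the conductance matrix is written out by hand rather than generated by a simulator such as SPICE.

```python
# Minimal nodal-analysis sketch: circuit analysis as a linear system G @ v = i.
# Hypothetical circuit: a 10 V source feeds node 1 through R1; R2 joins
# node 1 to node 2; R3 ties node 2 to ground. Values are invented.
import numpy as np

VS, R1, R2, R3 = 10.0, 1e3, 2e3, 1e3

# Conductance matrix and source vector from Kirchhoff's current law.
G = np.array([[1/R1 + 1/R2, -1/R2],
              [-1/R2,        1/R2 + 1/R3]])
i = np.array([VS / R1, 0.0])

v = np.linalg.solve(G, i)
print(v)  # [7.5 2.5] -> node voltages in volts
```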
Also important to electronics is the study and understanding of electromagnetic field theory.
Electronics lab:
Due to the complex nature of electronics theory, laboratory experimentation is an important part of the development of electronic devices. These experiments are used to test or verify the engineer's design and detect errors. Historically, electronics labs have consisted of electronic devices and equipment located in a physical space, although in more recent years the trend has been towards electronics lab simulation software, such as CircuitLogix, Multisim, and PSpice.
Electronic systems design:
Electronic systems design deals with the multi-disciplinary design issues of complex electronic devices and systems, such as mobile phones and computers. The subject covers a broad spectrum, from the design and development of an electronic system (new product development) to assuring its proper function, service life and disposal. Electronic systems design is therefore the process of defining and developing complex electronic devices to satisfy specified requirements of the user.
Electronics industry:
The electronics industry consists of various sectors. The central driving force behind the entire electronics industry is the semiconductor industry sector, which has annual sales of over $481 billion as of 2018. The largest industry sector is e-commerce, which generated over $29 trillion in 2017. The most widely manufactured electronic device is the metal-oxide-semiconductor field-effect transistor (MOSFET), with an estimated 13 sextillion MOSFETs having been manufactured between 1960 and 2018. In the 1960s, U.S. manufacturers were unable to compete with Japanese companies such as Sony and Hitachi who could produce high-quality goods at lower prices. By the 1980s, however, U.S. manufacturers became the world leaders in semiconductor development and assembly.
Social media are interactive technologies that facilitate the creation and sharing of information, ideas, interests, and other forms of expression through virtual communities and networks. While challenges to the definition of social media arise due to the variety of stand-alone and built-in social media services currently available, there are some common features:
* Social media are interactive Web 2.0 Internet-based applications. * User-generated content such as text posts or comments, digital photos or videos, and data generated through all online interactions is the lifeblood of social media. * Users create service-specific profiles for the website or app that are designed and maintained by the social media organization. * Social media helps the development of online social networks by connecting a user's profile with those of other individuals or groups.
Users usually access social media services through web-based apps on desktops, or download services that offer social media functionality to their mobile devices (e.g., smartphones and tablets). As users engage with these electronic services, they create highly interactive platforms through which individuals, communities, and organizations can share, co-create, discuss, participate in, and modify user-generated or self-curated content posted online. Additionally, social media are used to document memories; learn about and explore things; advertise oneself; and form friendships, along with the growth of ideas from the creation of blogs, podcasts, videos, and gaming sites. This changing relationship between humans and technology is the focus of the emerging field of technological self-studies. Some of the most popular social media websites, with more than 100 million registered users, include Facebook (and its associated Facebook Messenger), TikTok, WeChat, Instagram, QZone, Weibo, Twitter, Tumblr, Baidu Tieba, and LinkedIn. Depending on interpretation, other popular platforms that are sometimes referred to as social media services include YouTube, QQ, Quora, Telegram, WhatsApp, Signal, LINE, Snapchat, Pinterest, Viber, Reddit, Discord, VK, Microsoft Teams, and more. Wikis are examples of collaborative content creation.
Many social media outlets differ from traditional media (e.g., print magazines and newspapers, TV, and radio broadcasting) in many ways, including quality, reach, frequency, usability, relevancy, and permanence. Additionally, social media outlets operate in a dialogic transmission system, i.e., many sources to many receivers, while traditional media outlets operate under a monologic transmission model (i.e., one source to many receivers). For instance, a newspaper is delivered to many subscribers and a radio station broadcasts the same programs to an entire city.
Since the dramatic expansion of the Internet, digital media or digital rhetoric can be used to represent or identify a culture. Studying the rhetoric that exists in the digital environment has become a crucial new process for many scholars.
Observers have noted a wide range of positive and negative impacts when it comes to the use of social media. Social media can help to improve an individual’s sense of connectedness with real or online communities and can be an effective communication (or marketing) tool for corporations, entrepreneurs, non-profit organizations, advocacy groups, political parties, and governments. Observers have also seen that there has been a rise in social movements using social media as a tool for communicating and organizing in times of political unrest.
History of social media:
Early computing:
The PLATO system was launched in 1960, after being developed at the University of Illinois and subsequently commercially marketed by Control Data Corporation. It offered early forms of social media features with 1973-era innovations such as Notes, PLATO’s message-forum application; TERM-talk, its instant-messaging feature; Talkomatic, perhaps the first online chat room; News Report, a crowdsourced online newspaper, and blog; and Access Lists, enabling the owner of a note file or other application to limit access to a certain set of users, for example, only friends, classmates, or co-workers.
ARPANET, which first came online in 1969, had by the late 1970s developed a rich cultural exchange of non-government/business ideas and communication, as evidenced by the network etiquette (or 'netiquette') described in a 1982 handbook on computing at MIT's Artificial Intelligence Laboratory. ARPANET evolved into the Internet following the publication of the first Transmission Control Protocol (TCP) specification, RFC 675 (Specification of Internet Transmission Control Program), written by Vint Cerf, Yogen Dalal and Carl Sunshine in 1974. This became the foundation of Usenet, conceived by Tom Truscott and Jim Ellis in 1979 at the University of North Carolina at Chapel Hill and Duke University, and established in 1980.
A precursor of the electronic bulletin board system (BBS), known as Community Memory, appeared by 1973. True electronic BBSs arrived with the Computer Bulletin Board System in Chicago, which first came online on February 16, 1978. Before long, most major cities had more than one BBS running on TRS-80, Apple II, Atari, IBM PC, Commodore 64, Sinclair, and similar personal computers. The IBM PC was introduced in 1981, and subsequent models of both Mac computers and PCs were used throughout the 1980s. Multiple modems, followed by specialized telecommunication hardware, allowed many users to be online simultaneously. Compuserve, Prodigy and AOL were three of the largest BBS companies and were the first to migrate to the Internet in the 1990s. Between the mid-1980s and the mid-1990s, BBSes numbered in the tens of thousands in North America alone. Message forums (a specific structure of social media) arose with the BBS phenomenon throughout the 1980s and early 1990s. When the World Wide Web (WWW, or ‘the web’) was added to the Internet in the mid-1990s, message forums migrated to the web, becoming Internet forums, primarily due to cheaper per-person access as well as the ability to handle far more people simultaneously than telco modem banks.
Digital imaging and semiconductor image sensor technology facilitated the development and rise of social media. Advances in metal-oxide-semiconductor (MOS) semiconductor device fabrication, reaching smaller micron and then sub-micron levels during the 1980s–1990s, led to the development of the NMOS (n-type MOS) active-pixel sensor (APS) at Olympus in 1985, and then the complementary MOS (CMOS) active-pixel sensor (CMOS sensor) at NASA's Jet Propulsion Laboratory (JPL) in 1993. CMOS sensors enabled the mass proliferation of digital cameras and camera phones, which bolstered the rise of social media.
Social impacts:
Disparity:
The digital divide is a measure of disparity in the level of access to technology between households, socioeconomic levels or other demographic categories. People who are homeless, living in poverty, elderly, or living in rural or remote communities may have little or no access to computers and the Internet; in contrast, middle-class and upper-class people in urban areas have very high rates of computer and Internet access. Other models argue that within a modern information society, some individuals produce Internet content while others only consume it, which could be a result of disparities in the education system where only some teachers integrate technology into the classroom and teach critical thinking. While social media use differs among age groups, a 2010 study in the United States found no racial divide. Some zero-rating programs offer subsidized data access to certain websites on low-cost plans. Critics say that this is an anti-competitive program that undermines net neutrality and creates a "walled garden" for platforms like Facebook Zero. A 2015 study found that 65% of Nigerians, 61% of Indonesians, and 58% of Indians agree with the statement that "Facebook is the Internet", compared with only 5% in the US.
Political polarization:
According to the Pew Research Center, a majority of Americans at least occasionally receive news from social media. Because algorithms on social media filter and display news content that is likely to match their users' political preferences, a potential impact of receiving news from social media is an increase in political polarization due to selective exposure. Political polarization refers to when an individual's stance on a topic is more likely to be strictly defined by their identification with a specific political party or ideology than by other factors. Selective exposure occurs when an individual favors information that supports their beliefs and avoids information that conflicts with their beliefs. A study by Hayat and Samuel-Azran conducted during the 2016 U.S. presidential election observed an "echo chamber" effect of selective exposure among 27,811 Twitter users following the content of cable news shows. The Twitter users observed in the study were found to have little interaction with users and content whose beliefs were different from their own, possibly heightening polarization effects. Another study of U.S. elections, conducted by Evans and Clark, revealed gender differences in the political use of Twitter between candidates. Whilst politics is a male-dominated arena, on social media the situation appears to be the opposite, with women discussing policy issues at a higher rate than their male counterparts. The study concluded that an increase in female candidates directly correlates to an increase in the amount of attention paid to policy issues, potentially heightening political polarization.
Stereotyping:
Recent research has demonstrated that social media, and media in general, have the power to increase the scope of stereotypes not only in children but in people of all ages. Three researchers at Blanquerna University, Spain, examined how adolescents interact with social media and specifically Facebook. They suggest that interactions on the website encourage representing oneself in the traditional gender constructs, which helps maintain gender stereotypes. The authors noted that girls generally show more emotion in their posts and more frequently change their profile pictures, which according to some psychologists can lead to self-objectification. On the other hand, the researchers found that boys prefer to portray themselves as strong, independent, and powerful. For example, men often post pictures of objects and not themselves, and rarely change their profile pictures, using the pages more for entertainment and pragmatic reasons. In contrast, girls generally post more images that include themselves, friends and things they have emotional ties to, which the researchers attributed to the higher emotional intelligence of girls at a younger age. The authors sampled over 632 girls and boys from the ages of 12–16 from Spain in an effort to confirm their beliefs. The researchers concluded that masculinity is more commonly associated with positive psychological well-being, while femininity displays less psychological well-being. Furthermore, the researchers discovered that people tend not to completely conform to either stereotype, and encompass desirable parts of both. Users of Facebook generally use their profiles to reflect that they are a "normal" person. Social media was found to uphold gender stereotypes, both feminine and masculine. The researchers also noted that traditional stereotypes are often upheld by boys more so than girls. The authors described how neither stereotype was entirely positive, but most people viewed masculine values as more positive.
Effects on youth communication:
Social media has allowed for mass cultural exchange and intercultural communication. As different cultures have different value systems, cultural themes, grammar, and world views, they also communicate differently. The emergence of social media platforms fused together different cultures and their communication methods, blending various cultural thinking patterns and expression styles.
Social media has affected the way youth communicate, by introducing new forms of language. Abbreviations have been introduced to cut down on the time it takes to respond online. The commonly known “LOL” has become globally recognized as the abbreviation for “laugh out loud” thanks to social media.
Social media has offered a new platform for peer pressure, with both positive and negative communication. From Facebook comments to likes on Instagram, how the youth communicate and what is socially acceptable is now heavily based on social media. Social media does make kids and young adults more susceptible to peer pressure. The American Academy of Pediatrics has also shown that bullying, the making of non-inclusive friend groups, and sexual experimentation have increased situations related to cyberbullying, issues with privacy, and the act of sending sexual images or messages to someone's mobile device. On the other hand, social media also benefits the youth and how they communicate. Adolescents can learn basic social and technical skills that are essential in society. Through the use of social media, kids and young adults are able to strengthen relationships by keeping in touch with friends and family, make more friends, and participate in community engagement activities and services.
Deceased users:
Social media content, like most content on the web, will continue to persist unless the user deletes it. This brings up the inevitable question of what to do once a social media user dies and no longer has access to their content. As it is a topic that is often left undiscussed, it is important to note that each social media platform, e.g., Twitter, Facebook, Instagram, LinkedIn, and Pinterest, has created its own guidelines for users who have died. In most cases on social media, the platforms require a next-of-kin to prove that the user is deceased, and then give them the option of closing the account or maintaining it in a 'legacy' status. Ultimately, social media users should make decisions about what happens to their social media accounts before they pass, and make sure their instructions are passed on to their next-of-kin.
Even though the Internet is still a young technology, it’s hard to imagine life without it now. Every year, engineers create more devices to integrate with the Internet. This network of networks crisscrosses the globe and even extends into space. But what makes it work?
To understand the Internet, it helps to look at it as a system with two main components. The first of those components is hardware. That includes everything from the cables that carry terabits of information every second to the computer sitting in front of you.
Other types of hardware that support the Internet include routers, servers, cell phone towers, satellites, radios, smartphones and other devices. All these devices together create the network of networks. The Internet is a malleable system — it changes in little ways as elements join and leave networks around the world. Some of those elements may stay fairly static and make up the backbone of the Internet. Others are more peripheral.
These elements are connections. Some are end points: the computer, smartphone or other device you're using to read this may count as one. We call those end points clients. Machines that store the information we seek on the Internet are servers. Other elements are nodes, which serve as connecting points along a route of traffic. And then there are the transmission lines, which can be physical, as in the case of cables and fiber optics, or they can be wireless signals from satellites, cell phone or 4G towers, or radios.
All of this hardware wouldn’t create a network without the second component of the Internet: the protocols. Protocols are sets of rules that machines follow to complete tasks. Without a common set of protocols that all machines connected to the Internet must follow, communication between devices couldn’t happen. The various machines would be unable to understand one another or even send information in a meaningful way. The protocols provide both the method and a common language for machines to use to transmit data.
A Matter of Protocols:
You’ve probably heard of several protocols on the Internet. For example, hypertext transfer protocol is what we use to view Web sites through a browser; that’s what the http at the front of any Web address stands for. If you’ve ever used an FTP server, you relied on the file transfer protocol. Protocols like these and dozens more create the framework within which all devices must operate to be part of the Internet.
Two of the most important protocols are the transmission control protocol (TCP) and the Internet protocol (IP). We often group the two together — in most discussions about Internet protocols you’ll see them listed as TCP/IP.
What do these protocols do? At their most basic level, these protocols establish the rules for how information passes through the Internet. Without these rules, you would need direct connections to other computers to access the information they hold. You’d also need both your computer and the target computer to understand a common language.
You’ve probably heard of IP addresses. These addresses follow the Internet protocol. Each device connected to the Internet has an IP address. This is how one machine can find another through the massive network.
The version of IP most of us use today is IPv4, which is based on a 32-bit address system. There’s one big problem with this system: We’re running out of addresses. That’s why the Internet Engineering Task Force (IETF) decided back in 1991 that it was necessary to develop a new version of IP to create enough addresses to meet demand. The result was IPv6, a 128-bit address system. That’s enough addresses to accommodate the rising demand for Internet access for the foreseeable future [source: Opus One].
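To get a feel for the difference in scale between the two address systems, the short Python sketch below (using only the standard ipaddress module) counts the possible addresses in each format and parses one example of each; the specific addresses shown are reserved documentation examples, not real hosts.

import ipaddress

# IPv4 addresses are 32 bits wide, IPv6 addresses are 128 bits wide.
print(f"IPv4 addresses: {2 ** 32:,}")     # 4,294,967,296
print(f"IPv6 addresses: {2 ** 128:,}")    # about 3.4 x 10^38

# Parsing the two formats with the standard library.
v4 = ipaddress.ip_address("192.0.2.1")    # reserved documentation IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")  # reserved documentation IPv6 address
print(v4.version, v6.version)             # 4 6
print(int(v4))                            # the 32-bit integer behind the dotted form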
When you want to send a message or retrieve information from another computer, the TCP/IP protocols are what make the transmission possible. Your request goes out over the network, hitting domain name servers (DNS) along the way to find the target server. The DNS points the request in the right direction. Once the target server receives the request, it can send a response back to your computer. The data might travel a completely different path to get back to you. This flexible approach to data transfer is part of what makes the Internet such a powerful tool.
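As a rough illustration of the name-lookup step described above, the sketch below asks the operating system’s resolver (which in turn consults DNS servers) for the addresses behind a hostname; www.example.com is just a placeholder, and the addresses printed will depend on where and when you run it.

import socket

# Resolve a hostname to the IP addresses a request could be routed to.
host = "www.example.com"   # placeholder hostname
for family, _, _, _, sockaddr in socket.getaddrinfo(host, 80, proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr[0])   # address family (AF_INET/AF_INET6) and IP address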
Packet, Packet, Who’s Got the Packet?
What is the Internet?
The internet is composed of computer networks that allow users to access information from other computers (provided that they have permission to do so). The internet uses protocols such as TCP/IP to make this communication possible.
What are the main features of the internet?
One of the main features of the internet is accessibility. Anyone with a computer and a broadband connection can get onto the internet without restriction. The internet is also low cost and compatible with most platforms.
How does data move through the Internet?
Data is chopped into packets. These packets move through an ISP. The ISP routes the request to a server further up the chain on the internet. Eventually, the request will hit a domain name server (DNS). This server will look for a match for the domain name you’ve typed in (such as http://www.howstuffworks.com). If it finds a match, it will direct your request to the proper server’s IP address. Packets have headers and footers that tell computers what’s in the packet and how the information fits with other packets to create an entire file. Each packet travels back up the network and down to your computer.
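The toy Python sketch below mimics the idea in miniature: a message is split into fixed-size chunks, each chunk gets a small header with a sequence number and total count, the chunks are shuffled to simulate taking different routes, and the receiver reassembles them. It is only an illustration of the concept; real IP packets are binary structures with far more metadata.

import random

def packetize(data: bytes, size: int = 8):
    # Split the payload into fixed-size chunks and attach a simple "header".
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    return [{"seq": i, "total": len(chunks), "payload": chunk}
            for i, chunk in enumerate(chunks)]

def reassemble(packets):
    # Sort by sequence number and stitch the payloads back together.
    ordered = sorted(packets, key=lambda p: p["seq"])
    assert len(ordered) == ordered[0]["total"], "some packets are missing"
    return b"".join(p["payload"] for p in ordered)

packets = packetize(b"How does data move through the Internet?")
random.shuffle(packets)                  # packets may arrive out of order
print(reassemble(packets).decode())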
How much data is on the internet?
Studies by PwC found that the internet had reached about 4.4 ZB (zettabytes) of data by 2019. (A zettabyte is roughly a billion terabytes.) Most of this data is held by service companies like Amazon, Microsoft, Facebook, and Google.
Starlink is a satellite-based global internet system that SpaceX has been building for years to bring internet access to underserved areas of the world. The idea is to beam high-speed, low-latency broadband internet to remote areas.
Starlink is a satellite internet constellation operated by SpaceX providing satellite Internet access to most of the Earth. The constellation consisted of over 1,600 satellites in mid-2021, and will eventually consist of many thousands of mass-produced small satellites in low Earth orbit (LEO), which communicate with designated ground transceivers. While the technical possibility of satellite internet service covers most of the global population, actual service can be delivered only in countries that have licensed SpaceX to provide service within that national jurisdiction. As of November 2021, the beta service offering is available in 20 countries.
How many satellites are part of this constellation?
According to a recent Bloomberg report, SpaceX’s Starlink unit has deployed more than 1,700 satellites to date in low-Earth orbit. This number could eventually reach 30,000 if it receives the necessary regulatory approvals and market demand warrants it.
How does it work?
There are no ground-based internet cables at play here. These satellites beam information through space, where it travels about 47% faster than it does through fibre-optic cable, according to Space.com. On the ground, the signals are received through a dish, which is in turn connected to a WiFi router.
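The 47% figure is consistent with simple physics: light in glass fibre travels at roughly c divided by the refractive index of the glass. The quick check below assumes a typical refractive index of about 1.47, which is an illustrative value rather than a measured Starlink specification.

# Rough check of the "47% faster" claim.
c = 299_792.458          # km/s, speed of light in vacuum
n_fibre = 1.47           # assumed refractive index of typical optical fibre
v_fibre = c / n_fibre    # speed of light inside the fibre
print(f"vacuum: {c:,.0f} km/s, fibre: {v_fibre:,.0f} km/s")
print(f"vacuum is about {100 * (c / v_fibre - 1):.0f}% faster")   # ~47%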
The SpaceX satellite development facility in Redmond, Washington houses the Starlink research, development, manufacturing, and orbit control teams. The cost of the decade-long project to design, build, and deploy the constellation was estimated by SpaceX in May 2018 to be at least US$10 billion.
Early-stage planning began in 2014, with product development occurring in earnest by 2017. Two prototype test-flight satellites were launched in February 2018. Additional test satellites and 60 operational satellites were deployed in May 2019. SpaceX launches up to 60 satellites at a time, aiming to deploy 1,584 of the 260 kg (570 lb) spacecraft to provide near-global service by late 2021 or 2022.
On 15 October 2019, the United States Federal Communications Commission (FCC) submitted filings to the International Telecommunication Union (ITU) on SpaceX’s behalf to arrange spectrum for 30,000 additional Starlink satellites to supplement the 12,000 Starlink satellites already approved by the FCC. By 2021, SpaceX had entered into agreements with Google Cloud Platform and Microsoft Azure to provide on-ground compute and networking services for Starlink.
Astronomers have raised concerns about the constellation’s effect on ground-based astronomy and how the satellites will add to an already jammed orbital environment. SpaceX has attempted to mitigate these concerns by implementing several upgrades to Starlink satellites aimed at reducing their brightness during operation. The satellites are equipped with krypton-fueled Hall thrusters, which allow them to de-orbit at the end of their life. Additionally, the satellites are designed to autonomously avoid collisions based on uplinked tracking data.
What do the satellites look like?
Each satellite in the Starlink project weighs just 573 pounds (260 kg). The body of each satellite is flat, and up to 60 of them can fit into one of SpaceX’s Falcon 9 rockets. Once in orbit, a single large solar array unfolds to power the satellite. The central portion houses four powerful antennas for internet transmissions. Each satellite relies on a set of lasers to connect with four others in orbit. Finally, they have ion thrusters that use krypton gas, which allows them to stay in orbit longer, even at these lower altitudes.
How many satellites have launched so far?
SpaceX launched its first test satellites in 2018. This was followed by the first batch of 60 operational satellites for the service in 2019. The most recent launch took place in mid-November 2021, with further launches planned for each month of the year. As of this writing, SpaceX has put about 1,844 satellites into orbit. That is well beyond the initial projection of 1,440 satellites, and it means that SpaceX has completed its first “shell” of satellites.
How much will Starlink internet access cost? In a CNN article, an email reportedly from Starlink invited people to try out the service. The email put the price at a one-time cost of $499 for the ground hardware and $99 a month for the basic internet service. Starlink has recently developed a new dish that is smaller and lighter than before, named Dishy McFlatface. However, it still costs $499.
By comparison, the HughesNet service costs as much as $150 a month for a 50 GB high-speed data plan (at 25 Mbps), with latency so poor that gaming is impossible and even tasks like streaming can be quite a chore.
A communications satellite is an artificial satellite that relays and amplifies radio telecommunication signals via a transponder; it creates a communication channel between a source transmitter and a receiver at different locations on Earth. Communications satellites are used for television, telephone, radio, internet, and military applications. As of 1 January 2021, there were 2,224 communications satellites in Earth orbit. Most communications satellites are in geostationary orbit 22,300 miles (35,900 km) above the equator, so that the satellite appears stationary at the same point in the sky; therefore the satellite dish antennas of ground stations can be aimed permanently at that spot and do not have to move to track the satellite.
The high frequency radio waves used for telecommunications links travel by line of sight and so are obstructed by the curve of the Earth. The purpose of communications satellites is to relay the signal around the curve of the Earth allowing communication between widely separated geographical points. Communications satellites use a wide range of radio and microwave frequencies. To avoid signal interference, international organizations have regulations for which frequency ranges or “bands” certain organizations are allowed to use. This allocation of bands minimizes the risk of signal interference.
In October 1945, Arthur C. Clarke published an article titled “Extraterrestrial Relays” in the British magazine Wireless World. The article described the fundamentals behind the deployment of artificial satellites in geostationary orbits for the purpose of relaying radio signals. Because of this, Arthur C. Clarke is often quoted as being the inventor of the concept of the communications satellite, and the term ‘Clarke Belt’ is employed as a description of the orbit.
The first artificial Earth satellite was Sputnik 1, which was put into orbit by the Soviet Union on October 4, 1957. It was developed by Mikhail Tikhonravov and Sergey Korolev, building on work by Konstantin Tsiolkovsky. Sputnik 1 was equipped with an on-board radio transmitter that worked on two frequencies, 20.005 and 40.002 MHz, or about 15 and 7.5 meters wavelength respectively. The satellite was not placed in orbit for the purpose of sending data from one point on earth to another; the radio transmitter was meant to study the properties of radio wave distribution throughout the ionosphere. The launch of Sputnik 1 was a major step in the exploration of space and rocket development, and marked the beginning of the Space Age.
Satellite orbits:
Communications satellites usually occupy one of three primary types of orbit: geostationary orbit (GEO), medium Earth orbit (MEO), or low Earth orbit (LEO); other orbital classifications are used to further specify orbital details. MEO and LEO are non-geostationary orbits (NGSO).
Geostationary satellites have a geostationary orbit (GEO), which is 22,236 miles (35,785 km) from Earth’s surface. This orbit has the special characteristic that the apparent position of the satellite in the sky, when viewed by a ground observer, does not change: the satellite appears to “stand still” in the sky. This is because the satellite’s orbital period is the same as the rotation rate of the Earth. The advantage of this orbit is that ground antennas do not have to track the satellite across the sky; they can be fixed to point at the location in the sky where the satellite appears.
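The quoted altitude can be checked with Kepler’s third law: an orbit whose period matches Earth’s rotation (one sidereal day) must have a particular radius. The sketch below solves T = 2π√(a³/μ) for a, using standard values for Earth’s gravitational parameter and radius.

import math

mu = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
T = 86_164.1             # sidereal day in seconds (Earth's rotation period)
earth_radius_km = 6_378.1

# Kepler's third law, T = 2*pi*sqrt(a**3 / mu), solved for the orbital radius a.
a_m = (mu * (T / (2 * math.pi)) ** 2) ** (1 / 3)
altitude_km = a_m / 1000 - earth_radius_km
print(f"Geostationary altitude ~ {altitude_km:,.0f} km")   # ~35,786 km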
Medium Earth orbit (MEO) satellites are closer to Earth. Orbital altitudes range from 2,000 to 36,000 kilometres (1,200 to 22,400 mi) above Earth. The region below medium orbits is referred to as low Earth orbit (LEO), and is about 160 to 2,000 kilometres (99 to 1,243 mi) above Earth.
As satellites in MEO and LEO orbit the Earth faster, they do not remain visible in the sky to a fixed point on Earth continually like a geostationary satellite, but appear to a ground observer to cross the sky and “set” when they go behind the Earth beyond the visible horizon. Therefore, to provide continuous communications capability with these lower orbits requires a larger number of satellites, so that one of these satellites will always be visible in the sky for transmission of communication signals. However, due to their relatively small distance to the Earth their signals are stronger.
Low Earth orbit (LEO)
A low Earth orbit (LEO) typically is a circular orbit about 160 to 2,000 kilometres (99 to 1,243 mi) above the earth’s surface and, correspondingly, a period (time to revolve around the earth) of about 90 minutes.
Because of their low altitude, these satellites are only visible from within a radius of roughly 1,000 kilometres (620 mi) from the sub-satellite point. In addition, satellites in low earth orbit change their position relative to the ground position quickly. So even for local applications, many satellites are needed if the mission requires uninterrupted connectivity.
Low-Earth-orbiting satellites are less expensive to launch into orbit than geostationary satellites and, due to proximity to the ground, do not require as high signal strength (signal strength falls off as the square of the distance from the source, so the effect is considerable). Thus there is a trade off between the number of satellites and their cost.
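Because received power follows an inverse-square law, the difference between the two orbit classes is dramatic. The comparison below uses an illustrative LEO altitude of 550 km against the geostationary altitude of 35,786 km, ignoring all other link-budget factors.

# Inverse-square comparison of path distance only (antenna gains, frequencies,
# and atmospheric losses are ignored in this sketch).
leo_km, geo_km = 550, 35_786
ratio = (geo_km / leo_km) ** 2
print(f"At equal transmit power, the LEO signal arrives ~{ratio:,.0f}x stronger")   # ~4,200x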
In addition, there are important differences in the onboard and ground equipment needed to support the two types of missions.
Satellite constellation:
A group of satellites working in concert is known as a satellite constellation. Two such constellations, intended to provide satellite phone services, primarily to remote areas, are the Iridium and Globalstar systems. The Iridium system has 66 satellites.
It is also possible to offer discontinuous coverage using a low-Earth-orbit satellite capable of storing data received while passing over one part of Earth and transmitting it later while passing over another part. This will be the case with the CASCADE system of Canada’s CASSIOPE communications satellite. Another system using this store and forward method is Orbcomm.
Medium Earth orbit (MEO):
A medium Earth orbit is a satellite in orbit somewhere between 2,000 and 35,786 kilometres (1,243 and 22,236 mi) above the earth’s surface. MEO satellites are similar to LEO satellites in functionality. MEO satellites are visible for much longer periods of time than LEO satellites, usually between 2 and 8 hours. MEO satellites have a larger coverage area than LEO satellites. A MEO satellite’s longer duration of visibility and wider footprint means fewer satellites are needed in a MEO network than a LEO network. One disadvantage is that a MEO satellite’s distance gives it a longer time delay and weaker signal than a LEO satellite, although these limitations are not as severe as those of a GEO satellite.
Like LEOs, these satellites do not maintain a stationary distance from the earth. This is in contrast to the geostationary orbit, where satellites are always 35,786 kilometres (22,236 mi) from the earth.
Typically the orbit of a medium earth orbit satellite is about 16,000 kilometres (10,000 mi) above earth. In various patterns, these satellites make the trip around earth in anywhere from 2 to 8 hours.
Ozone is a highly reactive form of oxygen. An ozone molecule is composed of three oxygen atoms (O3), instead of the two oxygen atoms in the molecular oxygen (O2) that we need in order to survive. In the upper atmosphere (stratosphere), the protective ozone layer is beneficial to people because it shields us from the harmful effects of ultra-violet radiation. However, ozone in the lower atmosphere (troposphere) is a powerful oxidizing agent that can damage human lung tissue and the tissue found in the leaves of plants. For more information about ozone’s effects on humans, refer to the EPA brochure Ozone and Your Health.
Ozone Sources:
Ozone is formed in the lower atmosphere primarily by nitrogen oxides (NOx) reacting with volatile organic compounds (VOCs) on warm, sunny days. Nitrogen oxides are released into the atmosphere as a by-product of any combustion.
For example, nitrogen oxides are released from the burning of vegetation during a fire. However, internal combustion engines (especially automobiles) and coal-fired power plants are the main sources of nitrogen oxides in the eastern United States. VOCs, or hydrocarbons, also come from man-made sources such as cars, service stations, dry cleaners, and factories, and from natural sources such as trees and other vegetation. In fact, the main source of VOCs in the southeastern United States is gases released by trees and other vegetation.
Patterns of Ozone Concentrations and Exposure:
Ozone exposures are usually greatest close to large urban areas like Dallas, Texas, or Atlanta, Georgia. Ozone exposures are higher in these major cities because there are more cars, industry, and other nitrogen oxide emission sources than in rural areas. Ozone concentrations can increase considerably on hot, sunny days when a stagnant air mass (i.e., little to no wind) is present. Therefore, ozone is primarily a problem during the summer months, when heat and sunlight are more intense. Furthermore, the ozone formed in cities, or the nitrogen oxides originating in cities, can be transported long distances into rural areas. In western North Carolina, for example, the high elevations above 4,000 feet have greater ozone exposures than nearby low-elevation areas. The figure below shows the average ozone concentration for each hour of the day for a low-elevation and a high-elevation ozone-monitoring site. The low-elevation site is adjacent to Asheville, North Carolina (called Bent Creek), and the high-elevation site is near Shining Rock Wilderness. It is noteworthy that these two sites are about 15 miles apart and separated by about 3,000 feet in elevation.
Average ozone concentrations: average ozone concentrations for each hour of the day (April through October 1998) for low-elevation (top) and high-elevation (bottom) sites. The low-elevation site has a diurnal pattern in ozone exposure, and the data also show that ozone exposures at lower elevations are less than those found at high elevations. Results were produced using the Ozone Calculator.
The Bent Creek data shows a typical pattern (called a diurnal pattern) of ozone concentrations throughout the day (Berry, C.R., 1964). Ozone concentrations begin to rise in the morning and then decrease after the sun sets in the evening. Remember that the recipe for forming ozone is warm, sunny days on which nitrogen oxides react with VOCs. One pattern the Bent Creek data shows is that ozone concentrations increase as solar radiation and temperature increase during the day. The Bent Creek data also reflects people’s daily activities. Typically, electrical generation (a major source of nitrogen oxides) increases in the morning as people get ready for work, and remains high on hot days in order to provide electricity to cool people’s homes and businesses. Also, when people drive to work each day they release nitrogen oxides from the tailpipes of their automobiles. The large amount of nitrogen oxides released early in the day contributes to the recipe that forms ozone. The combination of a favorable environment and high nitrogen oxide emissions produces high ozone concentrations during the day.
Conversely, later in the day, many people drive home from work and electrical demand remains high on the hot days – thus there are still large amounts of nitrogen oxides released into the atmosphere. Solar radiation declines until sunset and the temperature also decreases. As nightfall approaches there is a lower likelihood that ozone will form because there is not enough sunlight (ultraviolet radiation) to cause the reactions necessary to form ozone. The nitrogen oxide emissions then serve an interesting role due to their abundance. Instead of contributing to ozone formation, the nitrogen oxides react with the ozone present in the atmosphere and cause a reduction of ozone concentrations during the nighttime. This occurs because nitrogen oxide molecules, in the absence of heat and strong sunlight, remove the third oxygen atom from the unstable ozone molecule.
In mountain valleys, such as occur near Bent Creek, ozone-forming pollution comes from both local and out-of-state sources. Winds can carry ozone formed in urban areas long distances to surrounding rural areas. Much of the ozone pollution at high elevations in the mountains of western North Carolina is transported by winds from other states. The results from the high-elevation ozone monitoring site near Shining Rock Wilderness (figure above) show that ozone concentrations change little throughout the day and that average concentrations are greater than at the Bent Creek site. Consequently, people and vegetation at higher elevations are exposed to more ozone than people and vegetation at low elevations.
Effect on Plants:
These blackberry plants near Shining Rock Wilderness had severe ozone symptoms present in mid-August 1997. Ozone effects on plants are most pronounced when soil moisture and nutrients are adequate and ozone concentrations are high. Under good soil moisture and nutrient conditions, the ozone will enter the leaf through openings and damage the cells that produce the food for the plant. Once the ozone is absorbed into the leaf, some plants spend energy to produce bio-chemicals that can neutralize the toxic effect of the ozone. Other plants will suffer a toxic effect, and growth loss and/or visible symptoms may occur. The presence of ozone in an area can be detected when consistent and known symptoms are observed on the upper leaf surface of a sensitive plant species.
For example, some air specialists use blackberry plants as a “bio-indicator” of ground level ozone. The photograph to the right shows the severe reddening of the blackberry foliage near Shining Rock Wilderness in western North Carolina when both adequate soil moisture and high ozone concentrations were present.
The presence of ozone symptoms is not an accurate indicator of how much growth loss a sensitive plant has suffered from ozone exposure. Therefore, some air resource specialists rely upon measurements taken with ozone monitoring equipment in order to predict whether growth loss has occurred. An ozone monitor provides over 4,000 ozone readings from April through October. Researchers and technical specialists have examined ways to summarize and use this extensive information. The Ozone Calculator is one tool that has been developed to estimate whether the ozone exposures recorded at a monitoring site could cause growth loss to the vegetation.
Exposure Indices:
There are two important statistics used to estimate growth loss to vegetation when summarizing data from an ozone monitor. The N100 statistic is the number of hours when the measured ozone concentration is greater than or equal to 0.100 parts per million (ppm). Experimental trials with frequent peaks (hourly averages greater than or equal to 0.100 ppm) have been demonstrated to cause greater growth loss to vegetation than trials with no peaks in the exposure regime (Hogsett et al., 1985; Musselman et al., 1983; and Musselman et al., 1986). For this reason, the W126 (Lefohn and Runeckles, 1987) was developed as a biologically meaningful way to summarize hourly average ozone data. The W126 places greater weight on the measured values as the concentrations increase. It is therefore possible for a high W126 value to occur with few or no hours above 0.100 ppm, so it is also necessary to determine the number of hours the ozone concentrations are greater than or equal to 0.100 ppm. It should also be noted that the absence of N100 hours does not mean ozone symptoms will not be present when field surveys are conducted.
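A minimal Python sketch of the two indices is shown below, applied to a short made-up list of hourly ozone averages in ppm. The sigmoidal weighting used for W126, w = 1 / (1 + 4403·exp(−126·C)), is the commonly published form; treat the exact constants and the example data as assumptions to be checked against the Ozone Calculator documentation.

import math

def n100(hourly_ppm):
    """Number of hours at or above 0.100 ppm."""
    return sum(1 for c in hourly_ppm if c >= 0.100)

def w126(hourly_ppm):
    """Sigmoidally weighted sum of hourly concentrations (ppm-hours)."""
    return sum(c / (1 + 4403 * math.exp(-126 * c)) for c in hourly_ppm)

hours = [0.032, 0.045, 0.061, 0.083, 0.097, 0.104, 0.088, 0.052]  # made-up day
print("N100 =", n100(hours))            # 1 hour at or above 0.100 ppm
print("W126 =", round(w126(hours), 3))  # weighted exposure for these hours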
When rain falls onto the earth, it doesn’t just sit there; it starts moving according to the laws of gravity. A portion of the precipitation seeps into the ground to replenish Earth’s groundwater. Most of it flows downhill as runoff. Runoff is extremely important: not only does it keep rivers and lakes full of water, it also changes the landscape through erosion. Flowing water has tremendous power; it can move boulders and carve out canyons (look at the Grand Canyon!).
Runoff of course occurs during storms, and much more water flows in rivers (and as runoff) during them. For example, during a major storm at Peachtree Creek in Atlanta, Georgia, in 2001, the amount of water that flowed in the river in one day was 7 percent of all the streamflow for the year.
Some definitions of runoff:
1. That part of the precipitation, snow melt, or irrigation water that appears in uncontrolled (not regulated by a dam upstream) surface streams, rivers, drains or sewers. Runoff may be classified according to speed of appearance after rainfall or melting snow as direct runoff or base runoff, and according to source as surface runoff, storm interflow, or groundwater runoff.
2. The sum of total discharges described in (1), above, during a specified period of time.
3. The depth to which a watershed (drainage area) would be covered if all of the runoff for a given period of time were uniformly distributed over it.
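Definition (3) is just a volume spread over an area. The small worked example below uses made-up numbers: a runoff volume of 2.5 million cubic metres over a 50 square-kilometre watershed works out to a depth of about 50 mm.

# Runoff depth = runoff volume / watershed area (illustrative numbers only).
runoff_volume_m3 = 2_500_000           # cubic metres of runoff in the period
watershed_area_m2 = 50 * 1_000_000     # 50 km^2 expressed in square metres
depth_mm = runoff_volume_m3 / watershed_area_m2 * 1000
print(f"Runoff depth ~ {depth_mm:.0f} mm over the watershed")   # ~50 mm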
Meteorological factors affecting runoff:
* Type of precipitation (rain, snow, sleet, etc.)
* Rainfall intensity
* Rainfall amount
* Rainfall duration
* Distribution of rainfall over the watershed
* Direction of storm movement
* Antecedent precipitation and resulting soil moisture
* Other meteorological and climatic conditions that affect evapotranspiration, such as temperature, wind, relative humidity, and season
Physical characteristics affecting runoff:
* Land use
* Vegetation
* Soil type
* Drainage area
* Basin shape
* Elevation
* Slope
* Topography
* Direction of orientation
* Drainage network patterns
* Ponds, lakes, reservoirs, sinks, etc. in the basin, which prevent or alter runoff from continuing downstream
Runoff and water quality:
A significant portion of rainfall in forested watersheds is absorbed into soils (infiltration), is stored as groundwater, and is slowly discharged to streams through seeps and springs. Flooding is less significant in these more natural conditions because some of the runoff during a storm is absorbed into the ground, thus lessening the amount of runoff into a stream during the storm.
As watersheds are urbanized, much of the vegetation is replaced by impervious surfaces, thus reducing the area where infiltration to groundwater can occur. Thus, more stormwater runoff occurs—runoff that must be collected by extensive drainage systems that combine curbs, storm sewers (as shown in this picture), and ditches to carry stormwater runoff directly to streams. More simply, in a developed watershed, much more water arrives into a stream much more quickly, resulting in an increased likelihood of more frequent and more severe flooding.
What if the street you live on had only a curb built around it, with no stormwater intake such as the one pictured here? Any low points in your street would collect water when it rained. And if your street were surrounded by houses with yards sloping uphill, then all the runoff from those yards and driveways would collect in a lake at the bottom of the street.
A storm sewer intake such as the one in this picture is a common sight on almost all streets. Rainfall runoff, and sometimes small kids’ toys left out in the rain, is collected by these drains; the water is delivered via the street curb or the drainage ditch alongside the street to the storm-sewer drain and on to pipes that help move runoff to nearby creeks and streams. Storm sewers help to prevent flooding on neighborhood streets.
Drainage ditches to carry stormwater runoff to storage ponds are often built to hold runoff and collect excess sediment in order to keep it out of streams.
Runoff from agricultural land (and even our own yards) can carry excess nutrients, such as nitrogen and phosphorus into streams, lakes, and groundwater supplies. These excess nutrients have the potential to degrade water quality.
Why might stormwater runoff be a problem?
As it flows over the land surface, stormwater picks up potential pollutants that may include sediment, nutrients (from lawn fertilizers), bacteria (from animal and human waste), pesticides (from lawn and garden chemicals), metals (from rooftops and roadways), and petroleum by-products (from leaking vehicles). Pollution originating over a large land area without a single point of origin and generally carried by stormwater is considered non-point pollution. In contrast, point sources of pollution originate from a single point, such as a municipal or industrial discharge pipe. Polluted stormwater runoff can be harmful to plants, animals, and people.
Runoff can carry a lot of sediment
When storms hit and streamflows increase, the sediment moved into rivers by runoff can end up being visible from hundreds of miles up by satellites. The picture on the right shows the aftermath of Hurricane Irene in Florida in October 1999. Sediment-filled rivers are dumping tremendous amounts of suspended sediment into the Atlantic Ocean. The sediment being dumped into the oceans affects their ecology, in both good and bad ways. And this is one of the ways the oceans have become what they are: salty.
Florida, Oct. 14, 1999. When Hurricane Irene passed over Florida in 1999, the heavy rainfall over land caused extensive amounts of runoff that first entered Florida’s rivers which then dumped the runoff water, containing lots of sediment, into the Atlantic Ocean.
Florida, Dec. 16, 2002. The east coast of Florida is mostly clear of sediment from runoff. The shallow coastal waters to the west of Florida are very turbid (sediment-filled), perhaps from a storm that passed over a few days earlier.
Chandrayaan-1, India’s first mission to the Moon, was launched successfully on October 22, 2008 from SDSC SHAR, Sriharikota. The spacecraft orbited the Moon at a height of 100 km above the lunar surface for chemical, mineralogical and photo-geologic mapping of the Moon. The spacecraft carried 11 scientific instruments built in India, USA, UK, Germany, Sweden and Bulgaria.
After the successful completion of all the major mission objectives, the orbit has been raised to 200 km during May 2009. The satellite made more than 3400 orbits around the moon and the mission was concluded when the communication with the spacecraft was lost on August 29, 2009.
The idea of undertaking an Indian scientific mission to Moon was initially mooted in a meeting of the Indian Academy of Sciences in 1999 that was followed up by discussions in the Astronautical Society of India in 2000.
Based on the recommendations made by the learned members of these forums, a National Lunar Mission Task Force was constituted by the Indian Space Research Organisation (ISRO). Leading Indian scientists and technologists participated in the deliberations of the Task Force that provided an assessment on the feasibility of an Indian Mission to the Moon as well as dwelt on the focus of such a mission and its possible configuration.
After detailed discussions, it was unanimously recommended that India should undertake the Mission to Moon, particularly in view of the renewed international interest in moon with several exciting missions planned for the new millennium. In addition, such a mission could provide the needed thrust to basic science and engineering research in the country including new challenges to ISRO to go beyond the Geostationary Orbit. Further, such a project could also help bringing in young talents to the arena of fundamental research. The academia would also find participation in such a project intellectually rewarding.
Subsequently, Government of India approved ISRO’s proposal for the first Indian Moon Mission, called Chandrayaan-1 in November 2003.
The Chandrayaan-1 mission performed high-resolution remote sensing of the Moon in the visible, near infrared (NIR), low-energy X-ray and high-energy X-ray regions. One of the objectives was to prepare a three-dimensional atlas (with high spatial and altitude resolution) of both the near and far side of the Moon. It aimed at conducting chemical and mineralogical mapping of the entire lunar surface for the distribution of mineral and chemical elements such as Magnesium, Aluminium, Silicon, Calcium, Iron and Titanium, as well as high atomic number elements such as Radon, Uranium and Thorium, with high spatial resolution.
Various mission planning and management objectives were also met. The mission goal of harnessing the science payloads, lunar craft and the launch vehicle with suitable ground support systems, including a Deep Space Network (DSN) station, was realised, which was helpful for future explorations like the Mars Orbiter Mission. Mission goals like spacecraft integration and testing, launching and achieving lunar polar orbit of about 100 km, in-orbit operation of experiments, communication/telecommand, telemetry data reception, quick look data and archival for scientific utilisation by scientists were also met.
PSLV-C11:
PSLV-C11, chosen to launch the Chandrayaan-1 spacecraft, was an updated version of ISRO’s Polar Satellite Launch Vehicle in its standard configuration. Weighing 320 tonnes at lift-off, the vehicle used larger strap-on motors (PSOM-XL) to achieve higher payload capability.
PSLV is the trusted workhorse launch vehicle of ISRO. Between September 1993 and April 2008, PSLV had twelve consecutive successful launches, carrying satellites to Sun Synchronous, Low Earth and Geosynchronous Transfer Orbits. On October 22, 2008, its fourteenth flight launched the Chandrayaan-1 spacecraft.
By mid 2008, PSLV had repeatedly proved its reliability and versatility by launching 29 satellites into a variety of orbits. Of these, ten remote sensing satellites of India, an Indian satellite for amateur radiocommunications, a recoverable Space Capsule (SRE-1) and fourteen satellites from abroad were put into polar Sun Synchronous Orbits (SSO) of 550-820 km heights. Besides, PSLV has launched two satellites from abroad into Low Earth Orbits of low or medium inclinations. This apart, PSLV has launched KALPANA-1, a weather satellite of India, into Geosynchronous Transfer Orbit (GTO).
PSLV was initially designed by ISRO to place 1,000 kg class Indian Remote Sensing (IRS) satellites into 900 km polar Sun Synchronous Orbits. Since the first successful flight in October 1994, the capability of PSLV was successively enhanced from 850 kg to 1,600 kg. In its ninth flight on May 5, 2005 from the Second Launch Pad (SLP), PSLV launched ISRO’s 1,560 kg remote sensing satellite CARTOSAT-1 and the 42 kg Amateur Radio satellite HAMSAT into a 620 km polar Sun Synchronous Orbit. The improvement in capability over successive flights has been achieved through several means, including increased propellant loading in the stage motors, employing composite material for the satellite mounting structure, and changing the sequence of firing of the strap-on motors.
Vikram Sarabhai Space Centre (VSSC), Thiruvananthapuram, designed and developed PSLV-C11. The ISRO Inertial Systems Unit (IISU) at Thiruvananthapuram developed the inertial systems for the vehicle. The Liquid Propulsion Systems Centre (LPSC), also at Thiruvananthapuram, developed the liquid propulsion stages for the second and fourth stages of PSLV-C11 as well as the reaction control systems. SDSC SHAR processed the solid motors and carried out launch operations. The ISRO Telemetry, Tracking and Command Network (ISTRAC) provided telemetry, tracking and command support during PSLV-C11’s flight.
Who can submit a Proposal?
Proposals could be submitted by individuals or groups of scientists and academicians belonging to recognized institutions, universities, planetaria and government organisations of India. Only those with at least four years of remaining service before superannuation are eligible to lead the project as PI/Co-PI. The proposals must be forwarded through the Head of the Institution, with appropriate assurance of the necessary facilities for carrying out the projects under this AO programme.
A lunar eclipse occurs when the Moon moves into the Earth’s shadow. This can occur only when the Sun, Earth, and Moon are exactly or very closely aligned (in syzygy) with Earth between the other two, and only on the night of a full moon. The type and length of a lunar eclipse depend on the Moon’s proximity to either node of its orbit.
Totality during the lunar eclipse of 21 January 2019. Direct sunlight is being blocked by the Earth, and the only light reaching it is sunlight refracted by Earth’s atmosphere, producing a reddish color.
A totally eclipsed Moon is sometimes called a blood moon for its reddish color, which is caused by Earth completely blocking direct sunlight from reaching the Moon. The only light reflected from the lunar surface has been refracted by Earth’s atmosphere. This light appears reddish for the same reason that a sunset or sunrise does: the Rayleigh scattering of bluer light.
Unlike a solar eclipse, which can only be viewed from a relatively small area of the world, a lunar eclipse may be viewed from anywhere on the night side of Earth. A total lunar eclipse can last up to nearly 2 hours, while a total solar eclipse lasts only up to a few minutes at any given place, because the Moon’s shadow is smaller. Also unlike solar eclipses, lunar eclipses are safe to view without any eye protection or special precautions, as they are dimmer than the full Moon.
Types of Lunar Eclipse:
A schematic diagram of the shadow cast by Earth. Within the umbra, the central region, the planet totally shields direct sunlight. In contrast, within the penumbra, the outer portion, the sunlight is only partially blocked. (Neither the Sun, Moon, and Earth sizes nor the distances between the bodies are to scale.)
A total penumbral lunar eclipse dims the Moon in direct proportion to the area of the Sun’s disk covered by Earth. This comparison of the Moon (within the southern part of Earth’s shadow) during the penumbral lunar eclipse of January 1999 (left) and the Moon outside the shadow (right) shows this slight darkening.
Penumbral Lunar Eclipse
This occurs when the Moon passes through Earth’s penumbra. The penumbra causes a subtle dimming of the lunar surface, which is only visible to the naked eye when about 70% of the Moon’s diameter has immersed into Earth’s penumbra. A special type of penumbral eclipse is a total penumbral lunar eclipse, during which the Moon lies exclusively within Earth’s penumbra. Total penumbral eclipses are rare, and when these occur, the portion of the Moon closest to the umbra may appear slightly darker than the rest of the lunar disk.
Partial lunar eclipse
This occurs when only a portion of the Moon enters Earth’s umbra, while a total lunar eclipse occurs when the entire Moon enters the planet’s umbra. The Moon’s average orbital speed is about 1.03 km/s (2,300 mph), or a little more than its diameter per hour, so totality may last up to nearly 107 minutes. Nevertheless, the total time between the first and the last contacts of the Moon’s limb with Earth’s shadow is much longer and could last up to 236 minutes.
Total lunar eclipse
This occurs when the moon falls entirely within the earth’s umbra. Just prior to complete entry, the brightness of the lunar limb (the curved edge of the moon still being hit by direct sunlight) will cause the rest of the moon to appear comparatively dim. The moment the moon enters a complete eclipse, the entire surface will become more or less uniformly bright. Later, as the moon’s opposite limb is struck by sunlight, the overall disk will again become obscured.
This is because as viewed from the Earth, the brightness of a lunar limb is generally greater than that of the rest of the surface due to reflections from the many surface irregularities within the limb: sunlight striking these irregularities is always reflected back in greater quantities than that striking more central parts, and is why the edges of full moons generally appear brighter than the rest of the lunar surface.
Central lunar eclipse
This is a total lunar eclipse during which the Moon passes through the centre of Earth’s shadow, contacting the antisolar point. This type of lunar eclipse is relatively rare.
The relative distance of the Moon from Earth at the time of an eclipse can affect the eclipse’s duration. In particular, when the Moon is near apogee, the farthest point from Earth in its orbit, its orbital speed is the slowest. The diameter of Earth’s umbra does not decrease appreciably within the changes in the Moon’s orbital distance. Thus, the concurrence of a totally eclipsed Moon near apogee will lengthen the duration of totality.
Selenelion
A selenelion or selenehelion, also called a horizontal eclipse, occurs where and when both the Sun and an eclipsed Moon can be observed at the same time. The event can only be observed just before sunset or just after sunrise, when both bodies will appear just above opposite horizons at nearly opposite points in the sky. A selenelion occurs during every total lunar eclipse; it is an experience of the observer, not a planetary event separate from the lunar eclipse itself. Typically, observers on Earth located on high mountain ridges undergoing false sunrise or false sunset at the same moment of a total lunar eclipse will be able to experience it. Although during a selenelion the Moon is completely within the Earth’s umbra, both it and the Sun can be observed in the sky because atmospheric refraction causes each body to appear higher (i.e., more central) in the sky than its true geometric position.
Timing
The timing of a total lunar eclipse is determined by what are known as its “contacts” (moments of contact with Earth’s shadow):
P1 (First contact): Beginning of the penumbral eclipse. Earth’s penumbra touches the Moon’s outer limb.
U1 (Second contact): Beginning of the partial eclipse. Earth’s umbra touches the Moon’s outer limb.
U2 (Third contact): Beginning of the total eclipse. The Moon’s surface is entirely within Earth’s umbra.
Greatest eclipse: The peak stage of the total eclipse. The Moon is at its closest to the center of Earth’s umbra.
U3 (Fourth contact): End of the total eclipse. The Moon’s outer limb exits Earth’s umbra.
U4 (Fifth contact): End of the partial eclipse. Earth’s umbra leaves the Moon’s surface.
P4 (Sixth contact): End of the penumbral eclipse. Earth’s penumbra no longer makes contact with the Moon.
Danjon scale:
L = 0: Very dark eclipse. Moon almost invisible, especially at mid-totality.
L = 1: Dark eclipse, gray or brownish in coloration. Details distinguishable only with difficulty.
L = 2: Deep red or rust-colored eclipse. Very dark central shadow, while outer edge of umbra is relatively bright.
L = 3: Brick-red eclipse. Umbral shadow usually has a bright or yellow rim.
L = 4: Very bright copper-red or orange eclipse. Umbral shadow is bluish and has a very bright rim.
Chandrayaan-2 is India’s second lunar probe, and its first attempt to make a soft landing on the Moon. It has an Orbiter, which will go around the Moon for a year in an orbit 100 km above the surface, and a Lander and a Rover that will land on the Moon. Once there, the Rover will separate from the Lander and move around on the lunar surface. Both the Lander and the Rover are expected to be active for one month.
CHANDRAYAAN BEGAN ITS JOURNEY: The Chandrayaan-2 satellite began its journey towards the Moon, leaving the Earth’s orbit in the dark hours of August 14, after a crucial maneuver called Trans Lunar Insertion (TLI) that was carried out by ISRO to place the spacecraft on the “Lunar Transfer Trajectory”.
India’s Moon mission: Chandrayaan-2 will be a ground-breaking mission to the south pole of the moon and should land on a high plain between two craters, Manzinus C and Simpelius N, which are around 70° south.
India’s Geosynchronous Satellite Launch Vehicle, GSLV MkIII-M1 had successfully launched the 3,840-kg Chandrayaan-2 spacecraft into the earth’s orbit on July 22.
In a major milestone for India’s second Moon mission, the Chandrayaan-2 spacecraft had successfully entered the lunar orbit on August 20 by performing Lunar Orbit Insertion (LOI) maneuver. On August 22, Isro released the first image of the moon captured by Chandrayaan-2. On September 2, ‘Vikram’ successfully separated from the orbiter, following which two de-orbiting manoeuvres were performed to bring the lander closer to the Moon.
‘Vikram’ and ‘Pragyan’
As India attempted a soft landing on the lunar surface on September 7, all eyes were on the lander ‘Vikram’ and rover ‘Pragyan’.
The 1,471-kg ‘Vikram’, named after Vikram Sarabhai, the father of the Indian space programme, was designed to execute a soft landing on the lunar surface and to function for one lunar day, which is equivalent to about 14 Earth days.
Chandrayaan, which means “moon vehicle” in Sanskrit, exemplifies the resurgence of international interest in space. The US, China and private corporations are among those racing to explore everything from resource mining to extraterrestrial colonies on the moon and even Mars.
LAUNCHED IN: India’s second mission to the Moon, Chandrayaan-2 was launched on 22nd July 2019 from Satish Dhawan Space Center, Sriharikota. The Orbiter which was injected into a lunar orbit on 2nd Sept 2019, carries 8 experiments to address many open questions on lunar science.
India’s ambitious mission to land on the Moon failed. The Vikram lander, of the Chandrayaan 2 mission, crashed on the lunar surface on September 7, 2019, but it was only in December that scientists found it. Why did it take so long to find the lander?
There are quite a few technical reasons for that. Let’s start with a quick recap of what happened on the landing day.
ISRO said three days after the landing attempt that it had spotted the lander, but failed to show any pictures or provide location coordinates to the public despite the claim.
That statement was in fact only the third, and the last, time ISRO publicly spoke of the lander’s condition. It did not, however, stop ISRO from later claiming that it had found the lander first, i.e. before NASA did with help from Subramanian.
Going by the publicly available evidence, NASA found the Vikram lander on the Moon’s surface, not ISRO. And what does Chandrayaan 2’s landing failure mean for ISRO? It means going back to the launch pad.
Pollution in the ocean is a major problem that affects the ocean and the rest of the Earth, too. Pollution in the ocean directly affects ocean organisms and indirectly affects human health and resources. Oil spills, toxic wastes, and dumping of other harmful materials are all major sources of pollution in the ocean.
Marine Pollution:
Marine pollution is a combination of chemicals and trash,most of which comes from land sources and is washed or blown into the ocean.
CAUSES:
Some of the main causes of marine pollution are as follows:
• Ocean dumping
• Land runoff
• Oil spills
• Littering
• Ocean mining
• Noise pollution
Ocean dumping:
Deliberate disposal of hazardous wastes at sea from vessels, aircraft, platforms or other human-made structures.
Land runoff:
Eighty percent of marine pollution comes from land runoff.
Oil spills:
Contamination of seawater due to oil being released into the sea, as a result of an accident or human error, is termed an oil spill.
Littering:
Marine litter is not only ugly; it can harm ocean ecosystems, wildlife, and humans. It can injure coral reefs and bottom-dwelling species and entangle or drown ocean wildlife. Some marine animals ingest smaller plastic particles and choke or starve.
Ocean Mining (Deep Sea Mining):
Mining under the ocean for gold, silver, copper, cobalt, etc. is another source of ocean pollution. Deep sea mining could even make climate change worse, since the disruption caused by the machines may release carbon stored in deep sea sediments.
Noise Pollution in the ocean:
Ocean noise refers to sounds made by human activities that can interfere with or obscure the ability of marine animals to hear natural sounds in the ocean.
Devastating Effects of Ocean Pollution:
1. Effect of Toxic Wastes on Marine Animals:
Oil spilled in the ocean can coat the gills and feathers of marine animals, making it difficult for them to move or fly properly or to feed their young.
2. Disruption to the Cycle of Coral Reefs:
Spilled oil floats on the surface of the water and prevents sunlight from reaching marine plants, affecting the process of photosynthesis.
3. Depletes Oxygen Content in Water:
When oxygen levels go down, the chances of long-term survival of marine animals like whales, turtles, sharks, dolphins, and penguins also go down.
4. Failure in the Reproductive System of Sea Animals:
Chemicals from pesticides can accumulate in the fatty tissue of animals, leading to failure in their reproductive system.
5. Effect on Food Chain:
Chemicals used in industries are ingested by small animals in the ocean, which are later eaten by larger animals, affecting the whole food chain.
6. Affects Human Health:
Animals from the affected food chain are then eaten by humans, which affects their health, as toxins from these contaminated animals are deposited in people’s tissues and can lead to cancer, birth defects or long-term health problems.
Solutions to Ocean Pollution:
1. Reduce the Use of Plastic Products
2. Use Reusable Bottles and Cutlery
3. Recycle Whatever You Can
4. Reduce the Discharge of Sewage into the Ocean
5. Stop Littering the Beach, and Start Cleaning It
6. Reduce the Use of Chemical Fertilizers
7. Reduce Energy Use
Conclusion
• We must help to stop ocean pollution by recycling and by using decomposable materials instead of plastic or glass to decrease our accumulating waste. Marine animals are suffering because of our actions, and if we do not put a halt to pollution soon, we too will suffer the consequences.