By – Supriya




During our school days we were taught that books are our best friends, friends who will never walk away from us. Undoubtedly, books are our friends: they provide knowledge in different fields and enhance our wisdom and intelligence. From class one onwards we all read books, and many of us continue reading them throughout our lives. To popularize the reading habit, 23 April 2022 is observed as World Book Day. It is also known as World Book and Copyright Day or International Day of the Book, an annual event organized by the United Nations Educational, Scientific and Cultural Organization (UNESCO) to promote reading, publishing, and copyright.
Each year, on 23 April, celebrations take place all over the world to recognize the scope of books: a link between the past and the future, a bridge between generations and across cultures. On this occasion, UNESCO and the international organizations representing the three major sectors of the book industry (publishers, booksellers, and libraries) select the World Book Capital for a year, to maintain the impetus of the Day's celebrations through its own initiatives. 23 April is a symbolic date in world literature, as several prominent authors, William Shakespeare, Miguel de Cervantes, and Inca Garcilaso de la Vega, all passed away on this date. The date was therefore a natural choice for UNESCO's General Conference, held in Paris in 1995, to pay a worldwide tribute to books and authors, encouraging everyone to access books (unesco.org/commemorations/worldbookday).
According to World Reading Habits in 2020 (geediting.com/world-reading-habits-2020), research conducted across the world found that the coronavirus has changed our reading habits. Some of the highlights of the World Reading Habits in 2020 research are, inter alia:
India reads more than any other country, followed by Thailand and China.
Printed books continue to drive more revenue than eBooks or audiobooks. However, physical book sales did dip because of the coronavirus (not surprisingly).
35% of the world read more because of the coronavirus.
World Reading Habits in 2020 also ranked countries by time spent reading, which may be seen below:
1) India (10.42 hours spent in reading per person per week)
2) Thailand (9.24 hours spent in reading per person per week)
3) China (8.00 hours spent in reading per person per week)
4) Philippines (7.36 hours spent in reading per person per week)
5) Egypt (7.30 hours spent in reading per person per week)
19) Spain (5.48 hours spent in reading per person per week)
20) Canada (5.48 hours spent in reading per person per week)
21) Germany (5.42 hours spent in reading per person per week)
22) USA (5.42 hours spent in reading per person per week)
It is pertinent to mention that these ranks were assigned by World Reading Habits in 2020. The interesting point is that India, Thailand, China, the Philippines, and Egypt occupy the first five positions, while so-called developed countries such as Spain, Canada, Germany, and the USA are placed comparatively low.
On this day of 23 April, as a senior citizen, I suggest that parents encourage their children to read more and more books, including newspapers. A reading habit is the sine qua non of a balanced lifestyle.

Students who are music enthusiasts, or simply music listeners, often prefer to study with music. The music they choose depends on their taste and on what they find comfortable, or at least non-distracting, during a study session. You can visit ragnarok.group for more.
Music can have both positive and negative effects on studying, depending on the student and the type of music. In this article, we will look at the best study music – what to listen to while studying.
Pros of Listening to Music While Studying
Music that is soothing and relaxing can help students to beat stress or anxiety while studying. It may improve focus on a task by providing motivation, improving mood, and aiding endurance. Sometimes, students have found that music helps them with memorization, likely by creating a positive mood, which indirectly boosts memory formation.
However, students who listen to music with lyrics while completing reading or writing tasks tend to be less efficient and come away having absorbed less information. Loud music can have adverse effects on reading comprehension and on mood, making focus more difficult.
What Does Science Say?
The theory that listening to music, particularly classical music, makes people smarter was developed in the early 1990s. It is not clearly established, at least scientifically, that listening to classical music, or any music for that matter, actually makes a person smarter or more intelligent. However, studies have found that, depending on the type of music, listening can help students focus while studying. In this post, we will look at the music types and playlists that can help you study better.
So, What is the Best Music to Listen to?
Start With Something You Know
It’s important to remember that there’s music for every occasion and so the best music for concentration might not be your first choice on a normal day. Think about the sort of music that you play whilst online gaming – do you ever turn it down for when you need to concentrate? Perhaps you have a playlist that you switch to when you know that the gameplay will be confusing; that might just be your best study music.
If you’re able to start by finding one or two songs that really help you to focus then that can be a great place to start. Music plays a big role in all of our lives, so it’s key to find something that suits your own tastes. Working from the tracks you’ve found, try a search through an app like Spotify to help find more tracks that are similar. Their algorithms can see which tracks people listen to and match you up to new songs that you might enjoy.
Try Classical Music
Classical music is known for being both peaceful and harmonious, creating a calm and serene study environment for the listener. It's recommended as one of the best studying genres for students because listeners report side effects like better mood and increased productivity. As far as side effects go, those aren't too shabby!
Stay Away From Lyrics
Music with lyrics, that is, most songs, is meant to be heard with a certain amount of attention. This can make studying tedious, as the lyrics interfere with your reading and waste your time. It's like trying to do two things at once and ending up doing neither.
Conclusion
Listening to music can calm you down, leading to more conscientious studying, elevate your mood, and motivate you to stay focused and study for longer periods of time. However, it really depends on what you listen to while studying.
At the end of the day, what actually matters is that whatever you’re listening to doesn’t distract you, calms you and truly puts your mind into study mode so that you can be productive and retain as much information as possible.

Focus on study can become all-consuming at exam time, and nutrition sometimes falls behind in the list of priorities. However, a healthy diet plays a vital role in achieving peak academic performance. So here we present the best foods to feed your brain during exam time, foods that keep you on track and help you prepare effectively.
An overall healthy diet is most important for keeping your body and brain nourished and ready to take on difficult tasks. Research shows that certain foods may be especially important for brain health and for promoting mental performance.
So the idea is to optimize healthy food intake by eating a range of foods from the five food groups.
These healthy meals and snacks suggested below are packed with nutrients that support brain function, provide a slow and sustained release of glucose and are rich in resistant starch which supports your gut microbiome and gut-brain axis.
A common mistake many make during this crucial period is to eat poorly and unhealthily. Junk food, lots of chocolate, energy drinks and crisps are often eaten in place of normal meals to “keep energy levels up”. However, this is not only harmful to your long-term health, but can also negatively affect your exam performance.
The best way to feed your brain is to eat a wide range of foods from all food groups. However, when you’re hitting the books, it can be a little tricky to put it into practice. So, we’ve come up with some easy meal-swaps to give you the best diet for studying that will get your brain humming in no time. Eating well-rounded meals most of the time will help you study better, and lead to better results, both in the short-term and the long-term. While many of the brain foods we’ve talked about have immediate results (like caffeine), the best results are the ones that show up over time, such as the slowing down of age-related cognitive decline, and the decreased likelihood of degenerative conditions like Alzheimer’s.

Have you ever been through the phase before a big presentation, seminar or a panel review when your heart is racing, your palms are damp and you’re starting to panic?
Everyone has. Anyone who claims they haven't either will someday, or has been living under a rock.
In that panic phase, you end up thinking so hard about how to beat the stress that you distract yourself from the presentation itself, which stresses you even more.
So, in this article, here are 5 tips that will help you convert this adversity into an opportunity!
1. Acknowledge your anxiety.
The first and foremost way to reduce your anxiety before an event is to acknowledge it. Labeling or acknowledging the stress allows you to be more realistic and find a logical solution that works for you. It makes you more transparent to yourself, which makes you a better judge and critic and lets you tap into your true self.
2. Talk positively to yourself.
After acknowledging your fear, it is necessary to be positive towards yourself. Positive self-talk lightens your mood and creates an effective mindspace that encourages productive thinking, which in turn generates an active mindset for the upcoming performance. In the minutes leading up to your presentation, repeat to yourself: "You are a dynamic speaker!" "You are enthusiastic and engaging!" "You are prepared and confident!"
3. Take several deep belly breaths.
Since anxiety tightens the muscles in the chest and throat, it’s important to diminish that restricting effect with deep inhalations. It maximizes the amount of oxygen that flows to the lungs and brain; interrupts the adrenalin-pumping ‘fight or flight’ response; and triggers the body’s normal relaxation response.
4. Don’t pretend you’re not nervous.
It is a natural tendency to project a fearless body language to your peers. But no matter how badly you want to put that on, it comes off as superficial and makes you more tense, because there is now one more thing on the list you have to care about. In fact, the very peers you are projecting that image to can easily tell you are faking it. What a waste, isn't it?
5. Practice the first minute in your mind.
Whatever you're planning to say as the captivating opener (a witty quotation, personal story, or startling statistic), rehearse the first few sentences several times. This makes your presentation more natural and less over-structured. It also gives a good head start to those with the common 'starting problem', and a kick of confidence that compels you into giving a more natural presentation.
Did you know that not only the way you think about yourself, but also your performance in your studies, work, and so on, depends in part on the clothing you wear? Yes, this might seem a little vague, and you might say the interdependence is trivial.
But, research shows that the clothes you wear can actually change the way you perform.

This can be elucidated by the example of a play. The stage actors rehearse in whatever clothes they are comfortable wearing. But when they rehearse with their costumes on, you see a stark difference in their performance. Compared to all their previous rehearsals, there is a noticeable confidence boost throughout the performance. This is because they experience the character, or better yet, they get into the character more deeply than ever before.
Apparel and presentation communicate volumes about you as a person. The question is not whether you care about fashion; it is what you are communicating, intentionally or unconsciously, through your fashion choices. You should be conscious about what you wear and what you don't, just as you are conscious about what you eat and what you don't. This includes ignoring fashion trends that do not fit you or do not make you comfortable, just as you say no to food you are allergic or intolerant to.
When you’re dressing or grooming, consider what it says about you and whether it’s in line with the message you want to communicate. There’s no right or wrong. It’s all about context. A tie can make you look reliable and rooted in tradition. This might be important at an investment firm, where clients want to know that you’re serious about stewarding their capital. But it can also come off as stuffy and resistant to change, which may be inappropriate for a tech startup.

Of course, dressing smart is also important for your confidence and sense of self-empowerment. But your style does more than just send messages, to your mind or to others. New research shows it actually impacts how you think.
“The formality of clothing might not only influence the way others perceive a person, and how people perceive themselves, but could influence decision making in important ways through its influence on processing style,” the study says.
The psychology behind this is largely subconscious. A gut feeling, commonly called intuition or a first impression, is really part of the very fast-paced mental process of thin-slicing, in which our brains process visual details instantaneously.
It’s how we continually judge books by their covers, all day, every day.
So choose your personal presentation with care. Presentation includes not only your clothes, but your accessories, hairstyle, fragrance, posture, body language, tone of voice, and the level of energy with which you move and speak. Think of the person that you need to be in any particular situation. Then dress, groom, and accessorize in a way that helps you mentally step into that personality.

Computer programming, or coding, is a very important skill in tech that engineering students, app developers, website developers, game developers and others use to create what they intend to. Over the past couple of years, coding has progressed from a geeky hobby to a career-creating skill. It is approachable to learn, fun to apply, and something you can grow step by step. The fact that practically anybody can learn to code surprises everyone who stumbles upon it. Moreover, employers have shown a willingness to pay a premium for the work of employees with coding and programming ability.
You might be wondering whether coding is something you should consider. How do you start learning to code? How long will it take? What exactly should your goal be? These are the questions everyone goes through before starting to learn coding today. So here are '5 benefits of learning to code', which address all of these questions and a little more besides.
1. Improves problem-solving:
Coding helps us understand logic at a deep level and improves our problem-solving proficiency. In its most basic terms, coding is really just assigning a computer a task based on the logical guidelines you've outlined. Once you have a really good grip on a coding language, it provides an arsenal of tools that helps you translate your ideas into working code. This mindset improves problem solving, because you learn to approach a problem by breaking it into pieces you know how to handle.
Hilary Bird, senior developer at Get Century Link, explains: "I can break problems down into small, separate parts and figure out how each is affecting the other. This helps me decide what area of the problem to focus on first."
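The decomposition Bird describes can be sketched in a short, purely illustrative Python example (the function names and the sample data are invented for the illustration): one vague task, averaging messy reading-time entries, is split into small, separate parts that can each be understood and tested on their own.

```python
# Hypothetical example: breaking one task into small, separate parts.
def clean(entries):
    """Keep only entries that are valid non-negative numbers."""
    return [float(e) for e in entries if str(e).replace(".", "", 1).isdigit()]

def average(values):
    """Average of a list, 0.0 if the list is empty."""
    return sum(values) / len(values) if values else 0.0

def weekly_reading_hours(raw_entries):
    """Glue the small parts together into the full task."""
    return average(clean(raw_entries))

print(weekly_reading_hours(["10.42", "9.24", "n/a", "8.00"]))  # ~9.22
```

Because each part does one thing, a bug (say, "n/a" slipping through) can be traced to a single small function instead of one tangled block.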
2. Great earning potential:
One of the most important and strongest merits of learning to code is that it offers great earning potential to programmers. Demand remains strong for coding-related jobs, which means they tend to pay really well. In the US, median annual salary information for coding and programming-related professions is as follows:
To put it into perspective, the national average for all occupations in the US is around $39,810. As you can see, careers that involve some programming, coding or scripting skills tend to come with above-average salaries.
3. Career flexibility:
Learning to code can help open up new areas of opportunity in your career and ultimately make you a more flexible candidate in a rapidly-shifting digital economy. Even if your job doesn’t require you to have a deep understanding of coding or programming languages, it still helps because you’ll likely need to interact with another person who does. Learning to code, even as a hobby, can give you a common reference point and better understanding of those who tackle some of the more complex programming and coding roles out there.
4. Complements creativity:
Another benefit of learning to code is that it can help you showcase your creativity online. For example, with coding knowledge, you can create online blogs or complex websites and customize them to make your own rather than using pre-existing templates. This can be a great way to help you stand out when designing your online portfolio or for creating a strong visual identity for your brand as an aspiring business owner.
5. Coding is a universal language:
Much like mathematics, code is a universal language; it is the same across the world and does not need to be translated, unless different coding languages are used. When transferring jobs or moving to another country, language barriers can sometimes get in the way. However, since coding languages are the same globally, it is a skill you can carry with you to any country. Learning how to code can make you a highly employable individual and give you the ability to thrive in any environment.
All things considered, coding or computer programming can be regarded as one of the most basic skills one should learn. Nowadays, coding has been incorporated in schools as a compulsory subject like mathematics, which is a good sign: it gives school-going children an early start in the programming world and can improve the many fields in which it is applied.
Electric Vehicles (EVs) are hailed and promoted by governments and car manufacturers as the technology that will decarbonize the transport sector. In India, the transportation sector emits an estimated 261 million tonnes of CO2, of which 94.5% is contributed by road transport. According to the World Health Organization (WHO), 14 of the 20 most polluted cities in the world are Indian cities. Moreover, fuel prices have increased sharply since last year and are predicted to increase further because of the Russian invasion of Ukraine.
All these reasons have made people turn to electric vehicles, and the government has provided incentives by introducing subsidies. Under section 80EEB, a total tax exemption of up to Rs 1,50,000 can be availed when paying off an EV loan. This tax exemption is available for both 4-wheeler and 2-wheeler EV purchases. So the future looks bright for EVs; but this future will not arrive without an environmental cost. The market for EVs in the developed world has grown rapidly over the past decade.

EV manufacturers market them as 'clean and green' technology, a framing that hides the dark side of EVs: embodied emissions, lithium and cobalt mining, and many sustainability and ethical issues.
While EVs produce no direct exhaust-pipe emissions, the production, distribution and disposal of EVs are highly polluting. The production of an EV involves many of the same polluting processes as that of an Internal Combustion (IC) engine vehicle; the main difference is that an EV uses lithium-ion batteries, with cobalt in their cathodes, to power the motor. The processes involved in making EV batteries are where the 'green' image of EVs starts to fall apart.

The lithium-ion battery supply chain stretches from Europe to Latin America and Africa. Over half of the earth's lithium resources are found in the so-called Lithium Triangle, which spans Bolivia, Argentina and Chile. Lithium mines have an environmental impact through their destructive extraction processes: landscapes are dug up and scarred, habitats can be destroyed, and chemical runoff can cause severe water pollution. Mining for lithium also consumes a tremendous amount of water, about 2,000,000 litres per tonne of lithium. This has caused water shortages in Chile, leaving too little water for daily public use.
Cobalt is another key ingredient of the lithium-ion battery. Cobalt resources are concentrated in the Democratic Republic of Congo, where numerous cases of human rights abuses have been reported in the country's cobalt mines: unmitigated health risks from the working environment, child labor, and more. Battery sustainability issues do not end there. At their end of life, the batteries mostly end up in landfills; no effective disposal technique has been adopted, because it is currently cheaper to extract new raw materials for batteries than to reuse old ones.
Putting things into perspective, EVs are much better for the environment than IC engine vehicles, since they decarbonize the transport sector, but we should be careful about framing them as a complete solution. EVs cannot be regarded as the 'perfect' replacement for fossil-fuel engines, contrary to how they are advertised; at this point in history, nature is letting people choose only between bad and worse.
Are you feeling sad? Don’t worry, everyone feels the same way. So stop whimpering and move ahead.
Why are you so excited? Don’t get so excited otherwise, something wrong will happen.
Blah blah blah……..
Ok! So this may sound practically correct. But in my opinion, it's not. Cry as much as you want to. Feel that sadness, but never let it kill you from inside. I know that crying has never been acknowledged; we have been programmed to think that way since childhood. And it's not just sadness and pain that you should let yourself feel, but also happiness, excitement, and every other kind of feeling. I often hear statements like: don't be overexcited, don't cry so much, don't be so emotional. I still remember how I used to curb my feelings, and many of you will be doing the same.

But yes, some people will argue. They will say that if you don't control your feelings, your feelings are going to control you. All I want to say is: feel that feeling until it dissipates. Allow yourself to feel whatever you want, irrespective of what others will think of you. You accomplished something, maybe a small achievement according to this pseudo world, but it may be a great attainment according to you. So if you are happy or excited about it, just feel it; never curb it worrying about what people will think of you. I know this is difficult for most of us, because we have been programmed this way, but just try it once. Forget this world and feel. Just feel.

Candidates are shortlisted for Regional Rural Banks (RRBs) through the IBPS RRB selection process. Regional Rural Banks are scheduled Indian commercial banks whose purpose is to offer banking services mainly to rural and semi-urban regions.
The recruitment is for Office Assistant (Multipurpose) and Officer Scale I, II, and III posts. The IBPS RRB vacancies for each post are expected to be made public. Candidates are eager to learn about the state-specific openings, and this large number of vacancies creates great job prospects for hopefuls each year.
The Age Limit and Qualification
The age limit for IBPS RRB Office Assistants is 18 to 28 years. For Scale I, it is 18 to 30; for Scale II, 21 to 32; and for Scale III, 21 to 40. There is age relaxation for different categories.
Educational qualifications are specified for each post. For the post of Office Assistant, a Bachelor's degree in any stream from a government-recognized university is enough. Proficiency in the local language is an advantage, and candidates should possess a basic understanding of computers. An IBPS RRB PO free mock test 2022 is available here for your preparation.
For Scale I, work experience is not a requirement. A minimum of a Bachelor's degree in any field from a recognized institution is required. Preference is given to applicants with degrees in Agriculture, Agricultural Marketing, Horticulture, Forestry, Information Technology, Animal Husbandry, Veterinary Science, Management, Agricultural Engineering, Pisciculture, Accountancy, Law, or Economics. Proficiency in the local language is expected, and basic computer skills are common requirements for all posts.

For Scale II, you must have, at the very least, a Bachelor's or Master's degree in one of the relevant streams from an accredited university or equivalent, with at least 50% marks. Two years' experience as an officer in a bank or financial institution is required.
For Scale III, a Bachelor's degree in any discipline from a recognized university or equivalent, with minimum marks of 50% or more, is required. Priority is given to candidates who hold a degree or diploma in banking, marketing, finance, agriculture, etc. A minimum of five years' experience as an officer in a bank or financial institution is also required.
The Process of Recruiting
The IBPS RRB Office Assistant selection procedure includes preliminary and main examinations. Candidates who pass the prelims exam sit for the mains test, the last phase, and candidates receive provisional allotment based on the scores of the main exam.
Three stages make up the IBPS RRB selection procedure for Officer Scale I hires. Candidates who pass the preliminary test are invited to take the main examination. After the main exam, candidates are shortlisted for the interview phase. The interview carries a total of 100 marks. The final merit lists are prepared using the marks obtained in the main stage and the interview stage in a ratio of 80:20.
The selection process comprises two rounds for the recruitment of IBPS RRB Scale II and Scale III Officers. The first stage is a single-level test. Candidates who pass the single-level exam are called for the interview, which is conducted for a total of 100 marks. The marks of the single-level exam and the interview are taken in the proportion of 80:20 to prepare the final merit lists. You can opt for the IBPS RRB PO free mock test 2022 here.
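As a rough sketch of how this 80:20 weighting works (the function name, the example marks, and the normalization to a 100-point scale are illustrative assumptions, not an official IBPS formula), the final merit score can be computed like this:

```python
def merit_score(exam_marks, exam_max, interview_marks, interview_max=100):
    """Combine normalized exam and interview scores in an 80:20 ratio."""
    exam_part = (exam_marks / exam_max) * 80             # exam contributes up to 80 points
    interview_part = (interview_marks / interview_max) * 20  # interview up to 20 points
    return exam_part + interview_part

# A candidate scoring 140/200 in the main exam and 70/100 in the interview:
print(merit_score(140, 200, 70))  # ~70.0
```

Note that the exam counts four times as much as the interview: moving from 70 to 80 in the interview adds only 2 points, while the same proportional gain in the exam adds 8.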

What comes to your mind when you come across the word 'ROBOT'? If you think robots are only the stuff of space movies and science fiction novels, think again. Robots are among the fastest-growing technological devices in the world. Ranging from space exploration to entertainment, a robot's performance is commendable.
The robot industry is growing very fast, providing us with new technologies that can assist with household work, medical work, industrial work, space exploration and many other tasks. Robotic technology has brought a lot of changes around us and continues to impact the way we live. Robotics has transformed greatly from the past to the present. It surrounds almost everyone in today's world and affects both work and leisure activities.
Robotics was once a sub-branch of mechanical, electrical and computer science engineering. But now, with increasing demand for artificial intelligence, machine learning and automation, robotics and automation has established itself as a branch of engineering in its own right.
Robotics deals with the design, construction, operation and application of robots and computer systems, along with their control and processing.
Robotic technology is becoming one of the fastest-growing technologies in the world. Some of the leading robotics companies are ABB, Fanuc, KUKA and Yaskawa. Robots are versatile and multifunctional: they can perform many tasks in today's society.
Some of the projects that became famous all over the globe are the Curiosity rover, Sophia the humanoid robot, the DJI Mavic and Phantom drones, and Robonaut R2 and R5. The use of robotic technology has deeply influenced the world in several ways. As technological advances continue, researchers design and build new robots to serve various practical purposes, whether domestic, commercial, military or space-related.
Incredibly, the human race has made robots that can replace humans in hazardous conditions like defusing bombs, mining and exploring shipwrecks. Robots are very beneficial to the human race: they are efficient, take less time to complete a task, save a lot of money and provide accurate and precise work. Various industry segments are making use of robotics to improve their production capabilities.
The craze for robots has spread so much that many movies and series are based on the theme. Some popular examples include Star Wars, RoboCop, Ra.One and Transformers.
The increasing popularity of robotics has resulted in the formation of the Robotics Society of India (RSI), an academic society founded on 10th July 2011 that aims at promoting Indian robotics and automation activities.
We all know every coin has two sides, and robotics too has a flip side. The largest barrier in the development of robots has been expensive hardware such as sensors and motors. Nowadays micro- and nano-technologies are also used, and these are exorbitant. Customizing and updating such complex technologies is an added problem.
With new advancements taking place each passing day, a new product launch can be a problem for those already in the game. Robots cut down labor costs, thus reducing employment opportunities for many workers. In several developed countries, scientists are building robotic military forces that could prove dangerous to others in the future. As the power and potential of computers keep expanding, robotics technology will be revolutionized, with imagination coupled to technology. It would not be wrong to say that in the near future robots may become smarter than the human race, and might even rule over it.
“Only one who wanders finds new paths” is a beautiful proverb. Travelling exposes people to new places, lets them meet new people and come across their stories, helps them gain experience, and sometimes lets them step out of a hectic, boring lifestyle to do something new and adventurous.
It is well said that India has 'Unity in Diversity', and it is not only about religion, culture, art forms and languages. India has diverse geography, numerous historical monuments and a wide variety of trades. From trekking in the mountains of Ladakh to boating in the backwaters of Alleppey, from experiencing nature up close at Kaziranga National Park to sandy beaches and crazy nights in Goa, India truly is nature's gift. The variety of linguistic and ethnic groups forms its racial diversity. There is diversity in religion, political beliefs and even in the climate of the country from north to south. All these factors attract tourists from all over the world.
Tourism is a flourishing industry in India, and people from all over the world are drawn to the country’s many destinations. Some of the major attractions are the Taj Mahal in Agra, the holy city of Varanasi, the Golden Temple of Amritsar, the Gateway of India, Amer Fort, the Konark Sun Temple, Qutub Minar, Fatehpur Sikri and Charminar.
The Ministry of Tourism established 25 January as National Tourism Day to educate people about the benefits of travelling and to spread awareness of tourism’s importance to the country’s economy. Travel and tourism is one of the key contributors to the Indian economy; according to 2020 statistics, tourism contributes around 4.7% of the country’s total GDP. However, the COVID-19 pandemic set back India’s tourism growth, as flight services were suspended to curb the spread of the virus.
Tourism has benefited the country in several ways: increased job opportunities, an improved quality of life for locals, and support for local incomes, since residents can open small businesses, petty shops, restaurants, transport services and other commercial ventures such as shopping malls and hotels. It also contributes to national integration and international friendship, and is a source of foreign exchange.
Despite offering so many perks, India’s tourism still lags behind. Overpriced taxis and delayed trains make travelling unaffordable and uncomfortable, and the absence of decent, hygienic accommodation adds to the problems. Bad roads and filthy surroundings make tourists suffer, and beyond these problems tourists are often exploited by guides and tour operators. Foreign tourists, in particular, frequently become victims of theft, kidnapping and other crimes.
The pandemic, too, has affected the economy to a great extent. Despite all these difficulties, India is recovering from the losses to trade and the damage to the economy.
As the seventh-largest country in the world, India stands apart from the rest of Asia, and its potential for tourism is vast. Tourism promotes national integration and is a highly labour-intensive industry, so measures should be taken to strengthen it. Ancient monuments should be protected, travelling should be made safer, and accommodation should be readily available. More infrastructure needs to be developed to attract tourists, and the ‘Incredible India’ campaign should be strongly promoted. These steps are of paramount importance if tourism is to keep flourishing in the country.
Selling began as soon as the first human beings appeared. From time immemorial, dedicated locations were created for this purpose, including markets, bazaars, and shopping areas, where you could buy everything from vegetables to luxury goods. In those days, no one really thought about arranging the merchants and their goods to make it more convenient for the customer to choose and to spend more money than they originally planned. Over time, however, as a wide variety of goods appeared, merchants had to think about how to arrange their goods conveniently and effectively to beat the competition. That is when category management came into existence.

Category management is the process of assortment management, whereby categories of goods are created and grouped based on shared characteristics. For example, a dairy category will include yoghurts, milk, cheese, cottage cheese, etc. The customer can then quickly find the right products, and the business can figure out exactly where and why it has to offer its goods to the consumer. In this regard, the introduction of category management supplies many advantages and obvious convenience for the consumer. What advantages are we talking about, you wonder? Well, let us tell you!
Category management is about more than just organising products and data. It is a clear plan for buying goods, which will, in turn, simplify the work of the business and suppliers because it allows you to:
Category management implies data analysis, a powerful tool for developing any business. Based on data collected by working with category management, a company can optimise its structure, gain a comprehensive view of costs and supply chains, and predict market conditions. Data analysis also makes it possible:
By analysing category management data, you will also be able to allocate business resources to those strategies, products, market trends and promotion channels that are most likely to lead your business to success.
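The kind of category-level data analysis described above can be sketched in a few lines. The sketch below simply aggregates revenue by category; the product names and figures are invented for illustration, and a real retailer would pull such records from its sales system.

```python
from collections import defaultdict

# Hypothetical sales records: (product, category, revenue).
# All names and numbers are illustrative, not real data.
sales = [
    ("yoghurt", "dairy", 120.0),
    ("milk", "dairy", 300.0),
    ("cheese", "dairy", 180.0),
    ("shampoo", "personal care", 250.0),
    ("soap", "personal care", 90.0),
]

def revenue_by_category(records):
    """Sum revenue per category, the most basic category-level report."""
    totals = defaultdict(float)
    for _product, category, revenue in records:
        totals[category] += revenue
    return dict(totals)

print(revenue_by_category(sales))
# {'dairy': 600.0, 'personal care': 340.0}
```

From a report like this, a manager can see at a glance which categories carry the business and where costs or promotion budgets should be allocated.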
Companies not using category management often lack complete and accurate information about costs. They do not track and analyse contracts and supplier relationships, focusing solely on sales. However, if the full list of suppliers and their terms is known, so are the costs, which can then be reduced or eliminated altogether, like fixing a leak in a hose.
Category management includes using specific tools such as Category Tree to view the customer’s path through an offline or online store. Then, by understanding their path, you can make it more convenient and, as a result, more profitable for your business. After all, any customer is primarily driven by consumer logic and personal needs. So, by understanding and using the laws of human psychology, you can significantly increase the number of sales. For example, the American company Schnucks did it in 1985 when it expanded the counter for high-end baby food. Thanks to the area organisation and considering the needs of their shoppers, their sales increased by 20%!
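A category tree of the sort mentioned above can be modelled as a simple nested structure. The categories and products below are illustrative assumptions, not any particular vendor's tool or catalogue:

```python
# A minimal category-tree sketch: nested categories with products
# at the leaves. Names are invented for illustration.
category_tree = {
    "dairy": {
        "yoghurts": ["plain yoghurt", "fruit yoghurt"],
        "milk": ["whole milk", "skimmed milk"],
        "cheese": ["cheddar", "cottage cheese"],
    },
    "baby": {
        "baby food": ["fruit puree", "cereal mix"],
    },
}

def find_category_path(tree, product):
    """Return the chain of categories leading to a product, or None."""
    for category, subtree in tree.items():
        if isinstance(subtree, list):
            if product in subtree:
                return [category]
        else:
            path = find_category_path(subtree, product)
            if path is not None:
                return [category] + path
    return None

print(find_category_path(category_tree, "cottage cheese"))
# ['dairy', 'cheese']
```

With the path in hand, a merchandiser can see exactly where a product sits on the customer's route through the assortment, which is the starting point for making that route more convenient and more profitable.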
Category management has plenty of advantages; the most important one is that it ensures the viability of the business for many years to come. After all, by using it, a company receives up-to-date and useful information about all its internal processes, from working with suppliers to organising goods inside the store. As a result, nothing will take it by surprise or expose it to further risks.
B. S. S Bhagavan
Assistant Professor
Research Scholar, Department of English
Vikrama Simhapuri University, Kakutur, Nellore, A.P.
Dr. R. Prabhakar
Associate Professor, Head, Department of English
Research Supervisor, Vikrama Simhapuri University,
Kakutur, Nellore, A.P.
R. W. Emerson, the father of American Transcendentalism, redefined nineteenth-century American culture, literature and religion. In his first work, Nature, published in 1836, Emerson articulated the basic concepts of his new philosophy. He advocated spiritualism and preached the oneness of Man, God and Nature. While defining the basic tenets of his philosophy, Emerson relied upon a blend of subjective and objective Idealism, yet he could not subscribe to any specific idealism. This ontological confusion created intellectual stagnation in his career.
Emerson studied German idealism and Greek philosophy to resolve these ontological riddles. Ancient Greek philosophy persuaded Emerson to delve into Vedantic literature, and the Vedantic doctrine of Mayavada helped him overcome the ontological obstacles and bring his philosophy to a legitimate conclusion.
Keywords: Illusion, Maya, Eternal Bliss, Phenomenal World, Sensory Objects, Delusive power, Ignorance, Realized being and God within.
Introduction:
Indian non-dualistic thought (Advaita Vedanta) advocates Mayavada and defines the nature of the world. The doctrine of Maya defines the absolute being as non-dual, the one without a second, while also accounting for the existence of the world and of all objects and beings.
R. W. Emerson, in his doctrine of Illusion, presents God as the non-dual and all-pervasive Oversoul. He also explains the nature of the phenomenal world and considers all beings and objects mere appearance. In The Conduct of Life, Emerson deals with the primary question of how one can attain spiritual realization while maintaining a perfect balance between spiritual and family life. According to Emerson, man, through his ignorance, subjects himself to the illusive power of the world and invites suffering. The individual can attain liberation through self-realization, and self-reliance enables him to attain the purpose of human life. Vedanta also discusses the concept of liberation and highlights the importance of Atmasakshatkara.
Vedanta says the universal being adopts illusion as a method of governing the whole creation. This principle is also known as the principle of relativity, of inversion and of oppositional states. According to Advaita there exists only one reality, but the individual perceives multiple realities under various forms and names; the multiple objects of this world are by-products of Maya. All the ancient religions advocate the same principle under different names. Indian non-dualistic thought describes Maya as indefinable because of its fleeting nature and indefinite structures. It says the absolute being dwells in every being and instructs all beings to overcome the delusive power of the phenomenal world, while the individual, out of ignorance, misidentifies himself with the ego and surrenders to Maya.
As a young man of seventeen during his Harvard days, Emerson composed a poem titled “The Superstition”, in which he discussed the delusive power of the world mentioned in Indian scriptures, though the poem could not satisfy his readers. Though he had some notion of Maya, he did not study the concept until 1845, when he borrowed the Bhagavata Purana from his friend Thoreau’s library and read about the doctrine of Maya. In 1860, in The Conduct of Life, Emerson advocated his doctrine of Illusion. The doctrine of Maya enriched Emerson’s philosophy and enabled him to answer the questions that had haunted him throughout his career.
Advaita Vedanta says there is one reality which manifests itself in various forms and names. The individual’s constant misperception makes him accept multiple realities in this world: he attributes the qualities of the real to the unreal and subjects himself to the power of Maya. Adi Shankara explains this with the analogy of the serpent and the rope. Indian non-dualistic thought appeals to mankind to rely upon intuition to realize the ultimate reality, articulating the “Neti Neti” method of comprehending the absolute. According to Advaita Vedanta, the absolute (God) mocks and instructs the individual through Maya, and nature (the proxy of the absolute) educates the individual, teaching him to see the ultimate reality through the veil of Maya.
Emerson, like Vedanta, refers to the absolute as the Oversoul in the context of the universe and to the individual soul in the context of man. For Emerson the Oversoul and the individual soul are one and the same; he finds no distinction between God and the individual. He writes: “Who shall define to me an individual? I behold with awe and delight many illustrations of the One Universal Mind. I see my being embedded in it; as a plant in the earth so I grow in God. I am only a form of him. He is the soul of me. I can even with mountainous aspiring say, I am God, by transferring my me out of the flimsy and unclear precinct of my body, my fortunes, my private will” (Journals, Vol. V, pp. 336-37).
In all his works he presents the doctrine of the infinitude of the private man. For him the individual is not merely an abode where the absolute resides; the individual is part and parcel of the Oversoul. Vedanta ratifies Emerson’s idea when it says, “know that self alone that is one without a second…and give up all other talks” (The Thirteen Principal Upanishads, p. 372). Emerson says our life in this world is like a momentary dream, which is unreal: “Philosophy affirms that the outward world is only phenomenal and the whole concern of dimness, of tailors, of gigs, of balls, whereof men make such account is…an intricate dream” (Ralph L. Rusk, ed., Letters, I, p. 412; also see Journals, Vol. III, pp. 335, 481).
In his first work, Nature, Emerson describes nature as the medium through which the divine reveals its mind. He writes in his journals, “nature is only the foliage, the flowering, and the fruit of the soul…every part, therefore exists as an emblem and sign of some fact in the soul” (Works, Vol. VIII, p. 53). He argues that the world, being an effect, cannot exist without its cause (the absolute), and that the world befools the individual with its illusive power. He urges mankind to discriminate between right and wrong and so be liberated from Maya. For him the world, along with arts, persons, letters, scriptures, religions and the like, is a by-product of Maya. He writes that “towns, landscapes are fugitives like the smoke or snow; the institutions, society and whole world are simply products of illusion. Though the soul exists in every particle of the universe it stands apart from the phenomenal world.” He says that though Maya is the power of the absolute, the absolute remains distinct from Maya; unlike phenomenal objects, the absolute is beyond all constraints. Thus Emerson’s concept of Nature resembles the Vedantic concept of Prakruti in many respects.
Emerson and Vedanta both articulate that the individual can liberate himself from Maya through self-realization, and Emerson echoes the same idea in his essay when he writes: “Flow, flow the waves hated, / Accursed, adored, / The waves of mutation: / No anchorage is.” Emerson quoted lines from the Vishnu Purana and other Indian scriptures to strengthen his argument, and in the essay he called Illusion by various names, such as Fascination, Trick, Deception and Disguise. He says that irrespective of age, sex and race, everyone is subject to Maya: “the young mortal enters the hall of the firmament…on the instant, and incessantly, fall snow storms of illusions” (Works, Vol. VI, p. 325, “Illusions”). He argues that everything an individual perceives through his senses is unreal and temporal, and that one who treats all his experiences, feelings and emotions as true cannot liberate himself from Maya. The world, with its illusive power, persuades mankind to chase temporal things. In his essay “Montaigne” he appeals to his contemporaries to depend upon the God within to escape the power of Maya.
In his essay “Illusions” Emerson considers all human emotions, such as love, anger and jealousy, to be results of human ignorance and Maya, and he regards all relations, positions and possessions as products of Maya as well. According to him, the individual invites suffering and sorrow by surrendering himself to Maya. Vedanta emphasizes the same idea: it says the individual depends upon temporal and unreal phenomenal objects to derive happiness. Vedanta defines the absolute as pure consciousness, existence and bliss (Sat Chit Ananda), and according to Vedanta this absolute, defined as Ananda, dwells in every individual. Thus the individual is himself the embodiment of eternal bliss. Yet man, through his ignorance, instead of depending upon his innate self, which is that embodiment of eternal bliss, seeks happiness from phenomenal objects, which are temporal, and so subjects himself to never-ending desires and sufferings.
Conclusion:
Like Vedanta, Emerson argues that man identifies himself with the phenomenal ego and fails to recognize his true self, and this attitude makes him the victim of the illusive power of the world. The individual, who is the abode of the God within and thus the real fountain of bliss, forgets this truth and seeks happiness from external sources. Both Emerson and Vedanta say the individual can attain Self-realization through intuition and thereby attain Moksha, which is the purpose of life. Thus the doctrine of Illusion propelled both Vedanta philosophy and Emerson’s Transcendentalism to a legitimate conclusion.
At some point in every teacher’s career comes a moment of feeling lost. Goals seem unattainable, state and district standards loom overhead, and it feels like you are adrift in the ocean. Lessons seem fragmented rather than building on each other, and the kids seem just as lost as their instructor. While your initial reaction might be to cry and conclude that you aren’t cut out to be an educator, don’t! Almost everyone feels this way at some point in their career, and the surest way to find the promised land is to build a curriculum map!

Curriculum mapping is one of the most important skills in a teacher’s non-teaching arsenal. Many think that the work inside the classroom is the hard part, but often that is only because preparations haven’t been made outside the classroom. Before the school year even begins, it is imperative to sit down and plan out your year. Try to answer the following questions ahead of time:
By asking these questions ahead of time, an educator can start to form a plan before a child even sets foot in the classroom. Ask any veteran teacher for their biggest piece of advice, and chances are many will answer, “Be proactive, not reactive.”
Once the big questions above are answered, many wonder where to go from there. Curriculum mapping can seem like such a daunting task. After all, you are planning out what you are going to do for the whole year, right? While the process seems daunting, your goal isn’t to account for every minute of the entire school year; for the average teacher, that would mean planning out 78,120 instructional minutes! The goal of curriculum mapping is to help you plan the following:
With your big pieces laid out, it is easy to check the progression and flow of your instruction, to make sure you have a clear goal for each big unit, and to confirm that you have thought out beforehand why you are doing the things you are doing.
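As a quick sanity check on the 78,120-minute figure mentioned earlier, the arithmetic works out if you assume a 180-day school year with roughly 434 instructional minutes (just over seven hours) per day:

```python
# The 180-day year and 434-minute day are assumptions chosen to match
# the figure in the text; actual calendars vary by district and state.
school_days = 180
minutes_per_day = 434

total_minutes = school_days * minutes_per_day
print(total_minutes)  # 78120
```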
Don’t feel like you need to create everything from scratch! There are a myriad of resources created for teachers, by teachers, that can be extremely helpful. A blank curriculum map template can be useful in helping to guide someone through the curriculum mapping process.

An economic indicator is a piece of economic data, typically on a macroeconomic scale, that analysts use to evaluate current and prospective investment opportunities. These metrics can also be used to assess the overall health of an economy.
Economic indicators can be anything an investor chooses to track, but certain data series supplied by governments and non-profit groups have grown popular. Some examples of such indicators include, but are not limited to:
Economic indicators are classified into groups or categories. Most of these economic indicators have a set publication schedule, allowing investors to anticipate and plan for certain data at specific times of the month and year.
Leading indicators, such as the yield curve, consumer durables, net business formations, and stock prices, are used to forecast an economy’s future movements. The figures from these financial guideposts shift before the economy does, hence the category name. The information provided by these indicators should be taken with a grain of salt, however, since it can turn out to be inaccurate.
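As a small worked example of reading one of the leading indicators named above, the sketch below computes the spread between 10-year and 2-year government bond yields from hypothetical figures; a negative spread (an inverted yield curve) has historically been read as a recession warning. The threshold values are illustrative assumptions, not a trading rule:

```python
def yield_curve_spread(ten_year_yield, two_year_yield):
    """Spread between long- and short-term yields, in percentage points."""
    return ten_year_yield - two_year_yield

def classify_curve(spread):
    """A crude, illustrative reading of the spread as a leading signal;
    the 0.5-point cutoff is an arbitrary assumption."""
    if spread < 0:
        return "inverted (historically a recession warning)"
    if spread < 0.5:
        return "flat (slowdown possible)"
    return "normal (expansion expected)"

# Hypothetical yields, not live market data.
spread = yield_curve_spread(ten_year_yield=3.8, two_year_yield=4.3)
print(classify_curve(spread))  # inverted (historically a recession warning)
```

Even a reading like this must be taken cautiously, for the reason the text gives: leading indicators can simply be wrong.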
Coincident indicators, such as GDP, employment levels, and retail sales, move in conjunction with particular economic activities. This type of measure depicts the activity of a given region or sector in real time, and it is closely monitored by many policymakers and economists.

Lagging indicators, such as gross national product (GNP), the consumer price index (CPI), unemployment rates, and interest rates, become visible only after a given economic activity has taken place. These data sets, as the name implies, show information after the event has occurred; such a trailing indicator appears only following significant economic changes.
An economic indicator is only beneficial if it is interpreted accurately. Economic growth, as measured by GDP, and corporate profit growth have historically been strongly correlated. However, predicting whether a specific company’s earnings will grow based solely on a single GDP reading is practically impossible.
Interest rates, gross domestic product, and existing-home sales, among other indicators, are all objectively important. Why? Because each reflects the cost of money, spending, investment, or the level of activity in a significant part of the economy.
Leading indicators predict where an economy is headed. The stock market is one of the most closely watched leading indicators; even if it isn’t the most reliable one, it is the one most people pay attention to. Because stock prices factor in forward-looking performance, the market can reflect the economy’s trajectory, provided earnings predictions prove accurate.
A strong market could indicate that profit estimates, and therefore total economic activity, are rising. A falling market, on the other hand, could imply that corporate earnings are expected to decline. However, the stock market’s value as an indicator is limited, since performance against estimates is not guaranteed; there is always a risk.

Ratan Naval Tata is the Chairman of Tata Sons and Tata Group, and is one of India’s most well-known and respected industrialists. Tata, who is 73 years old, is the chairman of one of the country’s largest conglomerates, which includes approximately 100 companies with a combined revenue of USD 67 billion. Tata Steel, Tata Motors, Tata Teleservices, Tata Power, Tata Consultancy Services, Tata Tea, Tata Chemicals, and The Indian Hotels Company are among his key Tata firms.
Tata was born on December 28, 1937, into one of Mumbai’s wealthiest families. Jamsetji Tata, the Tata Group’s founder, was his great-grandfather. Tata had a tumultuous childhood following his parents’ divorce, and his grandmother, Lady Navajbai, raised him in the lap of luxury at Tata Palace. Captivated by America, he attended Cornell University to study architecture and structural engineering, and later took a management course at Harvard University.
He joined the Tata Group in 1962, and his first position was with the Tata Steel division in Jamshedpur, where he worked alongside blue-collar workers shoveling stone and operating furnaces. In 1971, he was named Director-in-Charge of the National Radio and Electronics Company Limited (Nelco), and he succeeded in turning the company around.
Tata rose through the ranks of Tata Industries to become Chairman and was a driving force behind a slew of reforms. Tata Consultancy Services went public under his leadership, and Tata Motors was listed on the New York Stock Exchange, giving it greater international strength and prestige. He is credited with directing the Tata Group’s successful bids for Corus, an Anglo-Dutch steel and aluminium company, and for the Ford Company’s Jaguar and Land Rover brands.
During his leadership, the company saw the birth of the ‘Indica,’ India’s first fully Indian car. Tata was the designer of the vehicle. Tata’s food division purchased tea company Tetley for GBP 70 million in 2000. The group’s revenues increased nearly 12-fold in 2009-10, totaling USD 67.4 billion. Tata is also a member of the boards of Fiat SpA and Alcoa, as well as the international advisory boards of Mitsubishi Corporation, American International Group, JP Morgan Chase, Rolls Royce, Temasek Holdings, and the Singapore Monetary Authority.
He was awarded the Padma Bhushan by the Indian government in the year 2000. Ohio State University awarded him an honorary doctorate in business administration, the Asian Institute of Technology in Bangkok awarded him an honorary doctorate in technology, and the University of Warwick awarded him an honorary doctorate in science. Tata’s personal fortune is around GBP 300 million, and he controls less than 1% of the conglomerate. Charitable trusts own about two-thirds of Tata Group, which helps to fund good causes.
During the 26/11 attacks, Tata set a magnificent example of compassion and leadership. He stood outside the Taj hotel, unarmed, and oversaw the efforts to assist the victims. He showed his humanity by personally visiting the families of all 80 colleagues killed or injured, and he left no stone unturned in helping the victims, even asking the victims’ families and dependents what they wanted him to do.
Tata has begun making arrangements for his post-retirement life, even though his retirement is still a year away. He intends to establish an international design centre of global standards and scale. He has led the development of a number of groundbreaking designs and products, the most well-known of which is the Nano. His concern for the safety of small families commuting on two-wheelers inspired him to create the Nano, and he was the one who suggested that the little car have just one windscreen wiper, which reduced its price and maintenance costs.
He also spearheaded a strategy to deliver affordable and safe drinking water, and aided a group of Pune-based designers in developing Swach, a water filter that costs less than Rs 1,000. This 560-mm water purification device was created over the course of more than three years by Design Directions Private Limited.
Tata, a lifelong bachelor, values privacy and avoids the limelight. Only CDs, books, and pets keep him company, and the business baron drives himself to work in an unassuming Tata sedan.
Ratan Tata, who stands tall among his peers with a tremendous fortune and global fame, has remarkably never made the Forbes billionaires list.

Though beauty is in the eye of the beholder, the global beauty industry has never lost its allure. Alongside its steady growth, the industry has amassed a dedicated consumer base spanning decades. Who doesn’t enjoy applying cosmetics? They play an important role in many of our lives, from learning makeup tutorials and experimenting with colours on free days, to adorning ourselves before a wedding or an event, or simply putting on a quick, light application before work.
But we’ve come a long way from the days when consumers had to visit a cosmetics store in person. Why should brick-and-mortar stores be the sole option when online cosmetics stores now allow customers to order products at any time and from anywhere?
Although the Indian economy hit unprecedented lows in recent months as a result of the COVID-19 pandemic-induced shutdown, a handful of platforms have remained stable.
Nykaa is one of these e-commerce platforms for beauty and wellness products, and it has quickly become the preferred option for all cosmetic aficionados in India. Anyone who has even a passing interest in beauty and wellness products has probably heard of Nykaa at some time in their lives. This is an e-commerce site that specialises in cosmetics and beauty products. This platform, which was founded in 2012, has played a critical part in dispelling the idea that e-commerce and beauty retail do not do well in India.
Nykaa has recently become a household name across the country following the launch of its initial public offering (IPO) and the overwhelming response it received. The record listing propelled its founder, Falguni Nayar, into India’s elite group of self-made billionaire women, with her net worth rising to $6.5 billion, according to the Bloomberg Billionaires Index.
Through this blog, we will give you an overview of Nykaa’s platform: its founders, its business model, its funding, its inception, its growth, and its success story.
Nykaa is an Indian cosmetics company that specialises in multi-beauty and personal care items. It started off as a solo e-commerce platform before expanding into a variety of retail locations across the country.
For both women and men, the company specialises in providing a broad variety of cosmetics, skincare, haircare, perfume, bath & body, luxury, and wellness items. The portal, which claims to receive more than 1.5 million visitors per month from across India, offers well-curated and fairly priced branded products. Nykaa now has three types of stores: Luxe, On Trend, and Kiosks. Nykaa’s Luxe stores carry premium and luxury brands such as Estee Lauder, Dior, Huda Beauty, and M.A.C Cosmetics, among others, whereas Nykaa On Trend stock is limited to trending and fashionable names.
Aside from women’s beauty, the Nykaa Man website and app offer a variety of grooming products for men, and Nykaa Network provides an online community for beauty enthusiasts. The organisation remains a staunch believer in focusing on its vertical market.
Falguni Nayar, an MBA graduate from IIM Ahmedabad, began working in investment banking at Kotak Mahindra immediately after graduation. She was promoted to Managing Director of the same bank division in 2005.
She worked for Kotak Mahindra for nearly 18 years, during which time she decided to branch out from banking and try her hand at other fields.
She saw the untapped potential of the online beauty industry. Since there was a scarcity of accessible online brands and products that people could trust and buy with confidence at the time, she saw an opportunity for Nykaa.
Passionate about makeup and beauty products, she wanted to change Indian women’s perceptions of personal grooming. She founded Nykaa in 2012 with the goal of creating something uniquely her own.

It all started in 2012. Falguni Nayar was looking for a promising business opportunity in India when she noticed an inconsistency in the Indian beauty-products market: despite high demand, it wasn’t up to par with the product’s scope in countries such as France or Japan, owing to a lack of product availability in many places. As a result, she and her husband, Sanjay Nayar, founded Nykaa.
Nykaa’s inventory model is essentially what distinguishes it from its competitors. Products are sourced from brands and distributors and then sold directly to consumers, in contrast to a marketplace model, in which third-party vendors offer the products. This lets Nykaa keep a tighter grip on its stock, reducing the chances of counterfeit goods making their way onto the platform.
Nykaa’s Recent Growth
As reported by YourStory in July of this year, the platform stated that its in-house brand Nykaa Beauty has now expanded into travel care as well as home necessities.
Nykaa’s platform has come a long way since its inception in 2012, and it currently plays an important part in the advancement of the beauty industry.
Nykaa currently has over 5 million monthly active users, 80 outlets across India, and over 500 brands and 130,000 products available via its website, app, and stores, according to CNBCTV18.
The company’s transformation from an online approach to an omnichannel retail model has been a critical factor in its growth. This transformation has had a significant impact on how the brand is currently perceived by its audience, as well as allowing the brand to reach out to a previously untapped demographic.
According to the Economic Times, the Nykaa Fashion label has recently expanded into the intimate apparel area with its Nykd brand. On October 22, 2021, Nykaa announced the acquisition of Dot & Key Wellness, a domestic skincare platform.
Customers are gravitating toward key categories like personal skin and hair care, according to Nykaa.
Nykaa’s Initial Public Offering
The platform’s initial public offering (IPO) was open for subscription from October 28 to November 1, with a price range of Rs 1,085-1,125 per share. Nykaa’s initial public offering (IPO) was oversubscribed by 81.78 times the 2.64 crore shares available.
Investors reacted positively to Nykaa’s initial public offering (IPO) when it opened for subscription. The platform was listed on the BSE and NSE on November 10 and entered the Rs 1 lakh crore market-capitalization club when its stock closed at Rs 2,206.70, nearly double the issue price, valuing the beauty firm at about $14 billion.
Nykaa’s Investors
Nykaa’s parent firm, FSN E-Commerce Ventures Ltd, said on October 27th 2021 that it had raised roughly Rs 2,396 crore from anchor investors prior to its IPO.
Fidelity Management & Research Company had previously invested an undisclosed amount in the platform in late November 2020.
In early April of 2020, the platform received a new $13 million fundraising round from previous investor Steadview Capital, cementing its status as a unicorn.
Steadview Capital, TPG Growth, and Lighthouse Funds are among the platform’s top investors.
Nykaa has recently launched a number of new items, some with celebrity endorsements. In addition, it has added several new collections to its private label.
The platform has a number of big plans in the works to make a difference in the beauty and fashion business, and it’s been focusing on delivering high-quality items and services to users at reasonable prices. The platform has a lot of promise in terms of developing and gaining a more dominant position in the future.

Among all the good qualities that effective leaders bring to the workplace, research has shown that emotional intelligence (EI) is a more reliable predictor of overall success than IQ. EI is described as the ability to perceive and effectively manage our own and others’ emotions.
A strong proclivity for emotional intelligence, according to research published in the American Journal of Pharmaceutical Education, improves one’s ability to make sound decisions, build and sustain collaborative relationships, deal effectively with stress, and cope to a greater degree with constant change. To wit, it enables an individual not only to perform well in the workplace, but also in accomplishing various other goals and objectives in his or her life.
EI is also important for workplace conflict resolution, which entails being able to guide others through uncomfortable circumstances, politely bringing issues to the surface, and establishing solutions that everyone can agree on. Leaders who take the time to comprehend other points of view attempt to find a middle ground in conflicts. You can try to make others feel heard by paying attention to how others respond to one another, which will make them more open to compromise.
Emotional intelligence in the workplace begins with each individual from the inside out. It entails understanding different parts of your feelings and emotions, as well as devoting time to developing self-awareness, self-regulation, motivation, empathy, and social skills. The online Master of Arts in Leadership (MAL) degree from Ottawa University provides you with the tools to assess and analyse your emotional intelligence levels. You’ll also learn ways to increase your emotional intelligence at various phases of your career.
The 5 Elements of EI by Goleman
So, how does emotional intelligence play a role in workplace leadership? Emotional intelligence contains five critical parts, according to Daniel Goleman, an American psychologist and author of the breakthrough book “Emotional Intelligence.” When mastered, these elements enable leaders to achieve a greater level of emotional intelligence.
Self-Awareness
Emotional intelligence includes the ability to detect and understand one’s own emotions. Being aware of the impact of your behaviours, moods, and emotions on others goes beyond simply acknowledging your emotions. You must be able to monitor your own emotions, recognise different emotional reactions, and accurately name each feeling in order to become self-aware. Self-aware people are also conscious of the connections between their feelings and their actions.
Self-Regulation
Self-regulation is the ability to control and manage your emotions, which doesn’t mean putting your emotions on hold and disguising your genuine feelings. It merely entails waiting for the appropriate moment and place to express them. Self-regulation is all about expressing your emotions in a healthy way. Self-regulators are more adaptable and versatile in their approach to change. They’re also skilled at defusing stressful or challenging situations and managing conflict.
Motivation
In emotional intelligence, intrinsic motivation is also important. People who are emotionally intelligent are motivated by factors other than monetary gain, recognition, or acclaim. Instead, they are driven by a desire to meet their own personal demands and objectives.
Empathy
Empathy – or the ability to comprehend what others are feeling – is an essential component of emotional intelligence. However, it entails more than merely being able to perceive others’ emotional states. It also includes how you respond to others based on the information you’ve gathered. How do you react when you notice someone is unhappy, depressed, or disheartened? You may show them more care and concern, or you could make an attempt to cheer them up.
Social Skills
Another key part of emotional intelligence is the ability to interact well with people. True emotional knowledge entails more than just thinking about your own and others’ feelings. You must also be able to apply this knowledge in your everyday interactions and conversations. Managers gain from being able to form relationships and connections with their staff in professional situations. Workers gain from being able to form strong bonds with their supervisors and coworkers. Active listening, vocal communication skills, nonverbal communication skills, leadership, and persuasiveness are all important social skills.
Given all of these considerations, it’s easy to see why emotional intelligence is important in the workplace. If this research-based theory piques your interest as a business professional, a graduate degree in leadership might be perfect for you. Ottawa University’s online Master of Arts in Leadership programme is the best, fastest, and most economical in Kansas City. The Accreditation Council for Business Schools and Programs (ACBSP) has granted this competitive programme accreditation, indicating the excellent quality of business education provided. Ottawa University and its online programmes have been ranked near the top of the best colleges in Kansas City by U.S. News & World Report.

Gender inequality is visible in girls’ and boys’ homes and communities on a daily basis — in textbooks, the media, and among the adults who care for them.
At home, responsibilities may be divided disproportionately, with women and girls shouldering the burden of caregiving and chores. Women make up the bulk of low-skilled and underpaid community health workers who work with children, with few opportunities for advancement.
In addition, many girls receive less support in school than boys to pursue the studies they choose. This occurs for a number of reasons: Girls’ safety, hygiene, and sanitation needs may be overlooked, preventing them from attending class on a regular basis. Discriminatory teaching styles and educational resources also produce gender gaps in learning and skill development. As a result, approximately one in four girls between the ages of 15 and 19 is neither employed nor in education or training, compared to one in ten boys.
Gender inequalities in early childhood, however, are minor. Girls have a better rate of survival at birth, are more likely to be on track developmentally, and are equally as likely to attend preschool. In every country where data is available, girls exceed boys in reading among those who reach secondary school.
Adolescence, on the other hand, can provide substantial challenges for females’ well-being. Unwanted pregnancies, HIV and AIDS, and malnutrition are all increased by gender stereotypes and discrimination. Girls are shut off from the information and equipment they need to stay healthy and safe, especially in emergency situations and locations where menstruation is still taboo.
Gender discrimination can become violent in its most insidious form. Around 13 million girls between the ages of 15 and 19 have been subjected to forced sex. Adolescent girls are the most vulnerable to gender-based violence in both peace and conflict. Hundreds of millions of girls around the world are still subjected to child marriage and female genital mutilation, despite the fact that both have been recognised as human rights crimes internationally. And violence can occur during childbirth, especially in areas where female infanticide is a problem.
At the highest levels, harmful gender norms are promoted. In some nations, laws and policies that fail to safeguard – or even violate – girls’ rights, such as laws prohibiting women from inheriting property, remain entrenched. Gender norms affect boys as well: social ideas of masculinity can fuel child labour, gang violence, school dropouts, and armed group recruitment.
Despite significant obstacles that continue to deny them equal rights, girls are unafraid to pursue their dreams. The globe has seen unequal progress since the signing of the Beijing Declaration and Platform for Action in 1995 – the most comprehensive policy agenda for gender equality.
Girls are attending and completing school in greater numbers, and fewer are marrying or becoming moms while still children. Discrimination and stereotypes persist, however. Girls face new problems as a result of technological advancements and humanitarian crises, while old ones — violence, entrenched biases, and limited learning and life chances – endure.
That is why young women from all areas of life are speaking out against inequity. Stopping child marriage and female genital mutilation, demanding action on climate change, and breaking new ground in the fields of science, technology, engineering, and math (STEM) are all examples of girl-led groups asserting their authority as global change-makers.

The suffocating sensation of your heart pounding and sinking, the feeling that there is nothing good around you and that you are continually imprisoned in a circle of negativity. This is something that we have all experienced at some point in our lives. Especially in light of the current situation, where the entire planet is on the verge of collapsing due to a virus, every occurrence sends shivers down the spines of practically everyone. We can’t deny that the current situation is having an impact on our emotional and mental health, but we also can’t give in and lose our spark in these trying times. We must gather our resources and fortify our resolve.
Nobody can tell you that being anxious is abnormal. What isn’t natural is succumbing to it and allowing your anxiety to rule you. You’ll need to gather your thoughts and get a handle on the problem before you can figure out how to cope with it. Anxiety is associated with feelings of despair, pessimism, and agitation, yet there are various ways to overcome it.
1. Mindfulness
When you feel like everything around you is spinning out of control, take some time to step back and reflect on your life. Close your eyes and meditate while sitting calmly in fresh air. Meditation is a state of relaxation in which your body is put on hold and all of your muscles relax. Your blood pressure drops, and you can feel your thoughts all the way down to your bones. You don’t need a clear mind right away; the thoughts that are bothering you need to be acknowledged, and only then will they be expelled from your system. Take a few deep breaths to relax and feel the stress leave your body. Be open to all of the positive energies around you.
Meditation is an excellent approach to reduce stress and quiet a racing mind. It is the simplest method to set aside some time to address and eliminate the things that cause you anxiety.
2. Get some rest
A good sleeping routine is required to equip your body to withstand all of life’s instabilities. As simple as it may appear, only those who suffer from anxiety understand how tough it is to fall asleep when your thoughts are continuously bothering you. Anxiety causes insomnia and prevents people from falling into a deep slumber where they are free of all their worries. Still, if you hit the hay and manage to sleep nonetheless, there is a greater chance that when you wake up, you will feel relaxed and refreshed.
‘A good night’s sleep is the bridge between despair and hope,’ as the saying goes. So, do the same and try to fall asleep, allowing your body and mind to recover on their own.
3. Communicate with your loved ones.
There is nothing more therapeutic than sitting down with your family, close friends, or loved ones and having a great talk about all the serious and random things that come up. You can moan, rant, and express your insecurities and thoughts, and after a while, you’ll find a heavy load of anxiety lifting from your shoulders. People going through a phase of anxiety are always advised to have a few people in their life who can be their 3 a.m. friends and lend an ear to all the venting sessions to help them calm down.
Having a full-fledged conversation involving various shades of emotions helps in calming the mind as well as lightens the mood to its best.
4. Treat yourself to a spa day.
It never hurts to indulge yourself. Even if you’re in the midst of a bad mood, a little self-pampering can completely transform it. It is not necessary to go all out; a little something goes a long way. Give yourself a pedicure or manicure, take a hot shower, apply a face mask, and wear your favourite dress with subtle make-up to notice how this changes a dismal attitude into one of self-love.
Anxiety is a mental illness, and sitting in a corner and obsessively thinking about it isn’t going to assist you at all. Instead, make an effort to do something that makes you feel wonderful and energised.
5. Relax and let go
This is a skill that we all need to master: letting go. You have no control over your cognitive process, and you never know what will trigger you at any given moment. There are a number of factors that make you anxious, and you have no control over them. The greatest approach to get rid of all these ideas and reflections is to scribble them down on a piece of paper. You must let go of everything that is bothering you on the inside and is affecting your mental well-being. And writing it out on paper and clearing your mind of all the toxicity will help you feel better in every way.
Writing is the best way to express oneself, and using it to free yourself from all the swirling thoughts in your head is one of the most effective ways to do so.
These were the five most straightforward methods for dealing with anxiety and overcoming it. Anxiety leads to sadness, and the only way to avoid slipping into this pit is to face your problems head on and deal with them as effectively as possible.
Anyone who notices a loved one showing signs of anxiety disorders needs to be empathic towards them so that they can be guided through the process with the utmost care and concern. If you believe they require professional care, do not hesitate to contact a psychologist or therapist to assist them in maintaining their mental health.
We hope that we were able to shed some light on the various approaches to dealing with anxiety problems. None of us is an expert or a licensed psychologist, so these suggestions are based only on our own personal experiences. They might work for some and not for others; leave a comment below if any of these helped you relax your mind and soul and proved therapeutic for you.

Many teenagers’ lives are dominated by social media. According to a 2018 Pew Research Center survey of almost 750 13- to 17-year-olds, 45 percent of them are almost always online, and 97 percent use a social media platform like YouTube, Facebook, Instagram, or Snapchat. But how does social media use affect teenagers?
Teens can use social media to establish online identities, engage with others, and form social networks. These networks can be extremely beneficial to youth, especially those who are socially excluded, have impairments, or suffer from chronic illnesses.
Social media is also used by teenagers for enjoyment and self-expression. Furthermore, the platforms can educate kids on a range of topics, including healthy behaviors, expose them to current events, and allow them to interact across geographic barriers. Humorous or distracting social media, as well as social media that gives a genuine connection to peers and a large social network, may even help kids avoid sadness.
Social media use can also negatively affect teens, distracting them, disrupting their sleep, and exposing them to bullying, rumor spreading, unrealistic views of other people’s lives and peer pressure.
The risks might be related to how much social media teens use. A 2019 study of more than 6,500 12- to 15-year-olds in the U.S. found that those who spent more than three hours a day using social media might be at heightened risk for mental health problems. Another 2019 study of more than 12,000 13- to 16-year-olds in England found that using social media more than three times a day predicted poor mental health and well-being in teens.
Other research has found a link between excessive social media use and symptoms of depression or anxiety. In a 2016 study of more than 450 teenagers, greater social media use, late-night social media use, and emotional investment in social media — such as feeling upset when unable to log on — were all connected to poor sleep quality and higher levels of anxiety and depression.
How teens use social media may also determine its influence. According to a 2015 study, social comparison and feedback-seeking by teenagers on social media and mobile phones are associated with depressive symptoms. Furthermore, a small 2013 study indicated that older teenagers who used social media passively, such as by just looking at other people’s images, reported lower life satisfaction. These declines did not affect those who used social media to communicate with others or post their own content.
And, according to a previous study on the impact of social media on undergraduate college students, the longer they used Facebook, the stronger their opinion that others were happier than they were. However, the more time students spent socialising with their peers, the less they felt this way.
Experts believe that teens who post content on social media are at risk of disclosing intimate images or highly personal stories due to their impulsive natures. Teens may be bullied, harassed, or even blackmailed as a result. Teens frequently make posts without thinking about the repercussions or privacy issues.
You can take steps to encourage ethical social media use and mitigate some of its negative impacts. Consider the following suggestions:
Set sensible boundaries: Discuss with your teen how to keep social media from interfering with his or her activities, sleep, food, or homework. Encourage teens to follow a sleep ritual that excludes the use of electronic media, and keep cellphones and tablets out of their rooms. Set a good example by adhering to these guidelines.
Keep an eye on your teen’s social media profiles: Let your teen know that you’ll be reviewing his or her social media accounts on a regular basis. Aim to check at least once a week, and follow through.
Describe what isn’t acceptable: Encourage your kid not to gossip, spread rumours, bully, or harm someone’s reputation, whether online or off. Discuss what is proper and safe to publish on social media with your teen.
Encourage in-person interaction with friends: This is especially crucial for teenagers who are prone to social anxiety.
Talk about social media: Discuss your own social media usage. Inquire about your teen’s use of social media and how it makes him or her feel. Remind your adolescent that social media is full of unreasonable expectations.

Every business’s story usually includes the four stages of a corporate life cycle. The specifics, as well as the length of time a corporation spends in each, will differ. Some businesses will experience setbacks or readjustments, forcing them to return to a former stage. Others, on the other hand, may take a different approach to the final stages. In this post, we go through what happens at each stage of the corporate life cycle in greater depth.
A corporate life cycle is the progression of a company’s growth and development from its inception to its eventual demise, which can occur in a variety of ways. A business’s life cycle is divided into four stages:
Stage 1: Startup
This is the stage at which the company begins planning, developing, or launching its product or providing services to customers. This can be separated into two parts: pre-launch research and fundraising, and post-launch production and launch of the product or service.
The start-up stage spans the time between when a business is founded and when it reaches its first key degree of stability. The company’s founder creates prototypes or pilots, solicits and analyses comments, and seeks out potential investors or sources of funding at the start of this stage.
Other aspects of the start-up stage include:
Financial: The primary purpose of this stage is to secure funding. Owners require finances in order to rent space, purchase raw materials, pay employees, and purchase advertising. Due to the necessity to pay for capital inputs, startup costs are typically substantial, and profit will most certainly lag behind sales.
Personnel: Typically, the owner is the face and name of the company, and their identity is often conflated with that of the brand. All significant decisions are made by the owner or owners, who, in the case of small early-stage enterprises, take on multiple roles. They may create their own marketing materials, run their own social media accounts, manufacture products or provide services, and keep track of their finances.
Goal: At this stage, the most important goals are to gain awareness and attract enough clients to fund the costs of running the business. Another goal is to constantly and swiftly refine the company’s offerings in order to respond to market changes.
Stage 2: Growth and Establishment
The company has begun to generate steady income at this point, and both cash flow and revenue have improved. Redefining goals, reorganising departments, establishing a unified marketing plan, and beginning to explore community and business relationships are all good things to do now. Also, by this time, a distinct company culture may have evolved.
The following are some features of the growth and establishment stage:
Financial: Financially, the company’s profits should readily support wages and overhead at this time. Sales are likely to rise, and profit margins are likely to widen as the company continues to pay off capital investments and loans.
Personnel: Owners become more strategic in their hiring. This is when business leaders assemble teams to whom they entrust their vision and message communication. This necessitates clearly defining duties and responsibilities, as well as carefully hiring personnel who are capable of doing them. This will provide the owner more time and bandwidth to deepen existing client connections while also exploring new opportunities with potential clients and partners.
Goals: Profit and brand-driven goals are the primary objectives at this level. The company wants to be profitable in order to attract extra funding for expansion. It also aspires to increase its market share and strengthen its position.
Stage 3: Maturity
Sales may have plateaued at this point, and earnings should remain stable. A corporate structure with levels of hierarchy and clearly defined positions is normally in place. The company has a well-defined business model and a steady stream of clients and consumers, with new accounts being added on a regular basis. This may be the ultimate goal for many business owners. For many businesses, the expansion phase is when they look to expand their product line, launch new services, branch out into a new field, or enter a new geographic market.
This stage also has the following characteristics:
Financial: This stage is marked by slow but steady increases in profits and revenue, though the direction of the gain may start to flatten out as competitors and new entrants compete for market share. This is also the time to take advantage of that expansion by combining and utilising available money for expansion—almost like a second stage of the first. If a large-scale expansion is in the works, this could be the time when the company brings in new investors to help fund the project.
Personnel: Management and senior managers may be fully entrenched in their roles at this point. Managers are in charge of departments and follow specific guidelines. Owners sometimes remove themselves from the equation at this stage as the needs of the firm change, employing individuals to run the company entirely so they can focus on new initiatives or new directions within the same company.
Goals: At this stage, a mature company’s goals are focused on reinvention in order to stay relevant in a rapidly changing field.
Stage 4: Renewal or Decline
This phase of the corporate life cycle could be viewed as a conclusion or a new beginning. Businesses frequently have two options: expand the business to the point of reinvention or withdraw completely, either by selling the company or handing over complete management to executives. There will be some overall change if the decision is made to keep the company. Perhaps fresh branding, packaging, or a newly redesigned product to breathe new life into an old favourite.
Financial: At this point, business owners can go in a variety of directions. They can either reinvest in the firm to restart the growth cycle, or they can cash out by selling the company to another company or to the current management. Despite the fact that revenues appear to be stable at this point, practically all businesses will eventually find themselves sharing the market with competitors.
Personnel: The company has dispersed leadership at this time, and departments can operate within well-organized structures.
Goals: The fundamental purpose of this stage of the corporate life cycle is strategic planning. The owner and management must decide whether to continue the same, expand and evolve, or sell the business.
The Falcon super-heavy launch vehicle was designed to transport people, spacecraft, and various cargo into space. Such a powerful unit wasn’t created instantly; it had its predecessors. The history of the Falcon family of vehicles began with the Falcon 1, a lightweight launch vehicle 21.3 meters long and 1.7 meters in diameter, with a launch mass of 27.6 tonnes; the rocket could carry 420 kilograms (926 pounds) of payload on board. It became the first privately developed rocket to bring cargo into low Earth orbit. The Falcon 1 consisted of only two stages; the first comprised a supporting structure with fuel tanks, an engine, and a parachute system. Kerosene was chosen as the fuel, with liquid oxygen as the oxidizer.

The second stage also contained fuel tanks and an engine, though the latter had less thrust than the one in the first stage. Despite the launch cost of $7.9 million, five attempts in total were made to send the Falcon 1 beyond the atmosphere of our planet, and not all of them were successful. During the rocket’s debut launch, a fire started in the first-stage engine; this led to a loss of pressure, which caused the engine to shut down in the 34th second of flight. The second attempt ran into a problem with the fuel system of the second stage: fuel stopped flowing into its engine, and at the 474th second of flight it shut down as well. The third time the Falcon 1 took flight, it carried serious cargo: the Trailblazer satellite and two NASA microsatellites. The first-stage portion of the flight went normally, but when the time came to separate the stages, the first stage hit the second as its engine started, so the second stage couldn’t continue its flight.
The fourth and fifth launches showed good results, but that wasn’t enough. The main problem with the Falcon 1 was low demand due to its limited payload capacity. For this reason, SpaceX designed the Falcon 9; this vehicle can carry 23 tonnes of cargo on board. It is also a two-stage launch vehicle and uses kerosene and liquid oxygen as propellants. The vehicle is currently in operation, and the cost of a launch is $62 million. The first stage of the rocket is reusable; it can return to Earth and be used again. The Falcon 9 is designed not only to launch commercial communication satellites but also to deliver the Dragon 1 to the ISS. Dragon 1 can carry a six-tonne payload from Earth; this spacecraft supplies the ISS with everything it needs and also takes goods back.
The Dragon 2 is designed to deliver a crew of four people to the ISS and back to Earth. There is now also an ultra-heavy launch vehicle with a payload capacity of almost 64 tonnes: the most powerful and heaviest of the family, the Falcon Heavy. This rocket was first launched on February 6th, 2018, and the test was successful. The rocket sent Elon Musk’s car, a red Tesla Roadster, into space. After this debut, subsequent launches were also conducted without problems. The launch cost is estimated at $150 million.
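Taking the payload capacities and launch prices quoted in this article at face value (these are maximum quoted payloads; real missions rarely fly at the maximum), a short sketch comparing cost per kilogram across the Falcon family shows why the Falcon 1 struggled commercially:

```python
# Cost per kilogram to orbit, from the payload and price figures in the text.
rockets = {
    "Falcon 1":     (7.9e6, 420),        # (launch cost in $, payload in kg)
    "Falcon 9":     (62e6,  23_000),
    "Falcon Heavy": (150e6, 64_000),
}

for name, (cost, payload_kg) in rockets.items():
    print(f"{name}: ${cost / payload_kg:,.0f} per kg")
```

On these figures the Falcon 1 worked out to roughly $18,800 per kilogram, while the Falcon 9 and Falcon Heavy come in under $2,700 per kilogram, a nearly sevenfold improvement.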

The first stage of the Falcon Heavy consists of three parts. The three boosters together contain 27 incredibly powerful engines, nine in each one. The thrust created at takeoff is comparable to 18 Boeing 747s at full power. The second stage is equipped with a single engine. It is planned that the vehicle will be used for missions to the Moon and Mars. Currently, SpaceX is working on the Starship crewed spacecraft. According to its creators, this vehicle will be much larger and heavier than all of the company’s existing rockets and will be able to deliver cargo weighing more than a hundred tonnes into space. The launch of Starship into space is planned for 2022, with a payload mission to Mars to follow. Who knows, one of mankind’s largest dreams may come true within the next year.
The MRI (magnetic resonance imaging) scan is a medical imaging procedure that uses a magnetic field and radio waves to take pictures of our body’s interior. It is mainly used to investigate or diagnose conditions that affect soft tissue, such as tumors or brain disorders. The MRI scanner is a complicated piece of equipment that is expensive to use and found only in specialized centers. Although Raymond Vahan Damadian (b. 1936) is credited with the idea of using nuclear magnetic resonance to look inside the human body, it was Paul Lauterbur (1929-2007) and Peter Mansfield (b. 1933) who carried out the work most strongly linked to magnetic resonance imaging (MRI) technology. The technique makes use of hydrogen nuclei resonating when bombarded with magnetic energy. MRI provides three-dimensional images without harmful radiation and offers more detail than older techniques.

While training as a doctor in New York, Damadian began investigating living cells with a nuclear magnetic resonance machine. In 1971 he found that the signals from tumor cells persisted for longer than those from healthy cells. Damadian received a patent in 1974 for a machine doctors could use to detect cancer cells, but the methods available at the time were neither effective nor practical.
The real shift came when Lauterbur, a U.S. chemist, introduced gradients to the magnetic field so that the origin of the radio waves from the nuclei of the scanned object could be worked out. Through this he created the first MRI images in two and three dimensions. Mansfield, a physicist from England, came up with a mathematical technique to speed up scanning and produce clearer images. Damadian went on to build the first full-body MRI machine in 1977, and he produced the first full MRI scan of the heart, lungs, and chest wall of his skinny graduate student, Larry Minkoff, although in a very different way from modern imaging.
Working of an MRI machine
The key components of an MRI machine are a magnet, radio waves, gradient coils, and a highly advanced computer. The human body is about 60% water, and each of the billions of water molecules inside us consists of an oxygen atom bonded to two hydrogen atoms (H2O). The nuclei of the hydrogen atoms act as tiny magnets and are very sensitive to magnetic fields. The first step in taking an MRI scan is to use a big magnet to produce a uniform magnetic field around the patient. The gradient divides the magnetic field into smaller sections of different magnetic strengths to isolate individual body parts. Take the brain as an example: normally, the water molecules inside us are oriented randomly, but when we lie inside the magnetic field, most of our water molecules align with it, moving at the same rhythm, or frequency, as the field. The ones that do not align with the magnetic field are called low-energy water molecules. To create an image of a body part, the machine focuses on these low-energy molecules. The radio waves in an MRI machine move at the same rhythm, or frequency, as the magnetic fields.
By sending radio waves that match, or resonate with, the magnetic field, the machine gives the low-energy water molecules the energy they need to align with the field. When the machine stops emitting radio waves, the water molecules that had just aligned release the energy they absorbed and return to their original positions. This movement is detected by the MRI machine, and the signal is sent to a powerful computer, which uses imaging software to translate the information into an image of the body. By taking images of each section of the magnetic field, the machine produces a final three-dimensional image of the organ, which doctors can analyze to make a diagnosis.
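The "rhythm or frequency" at which the hydrogen nuclei resonate is proportional to the field strength (the Larmor relation). As a minimal illustrative sketch, using the standard textbook value for hydrogen’s gyromagnetic ratio (an assumption added here, not a figure from this article):

```python
# Larmor relation: resonance frequency f = (gamma / 2*pi) * B
# 42.58 MHz per tesla is the commonly quoted value for hydrogen nuclei (assumed).
GAMMA_OVER_2PI_MHZ_PER_T = 42.58

def larmor_frequency_mhz(field_tesla: float) -> float:
    """Resonance frequency (MHz) of hydrogen nuclei at the given field strength."""
    return GAMMA_OVER_2PI_MHZ_PER_T * field_tesla

print(round(larmor_frequency_mhz(1.5), 2))  # a typical 1.5 T scanner: 63.87 MHz
```

This is why the radio waves must be tuned to the magnet: a scanner with a stronger field has to transmit at a proportionally higher frequency to resonate with the water molecules.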
“Medicine is a science of uncertainty and an art of probability”. –William Osler
Expressing oneself through art seems a universal human impulse, while the style of that expression is one of the distinguishing marks of a culture. As difficult as it is to define, art typically involves a skilled, imaginative creator whose creation is pleasing to the senses and often symbolically significant or useful. Art can be verbal, as in poetry, storytelling, or literature, or can take the form of music and dance. The oldest stories, passed down orally, may be lost to us now, but thanks to writing, tales such as the Epic of Gilgamesh or the Iliad entered the record and still hold meaning today. Visual art dates back 30,000 years, to when Paleolithic humans decorated themselves with beads and shells. Then as now, skilled artisans often mixed aesthetic effect with symbolic meaning.

In an existence centered on hunting, ancient Australians carved animal and bird tracks into rock. Early cave artists in Lascaux, France, painted or engraved more than 2,000 real and mythical animals. Ancient Africans created stirring masks, highly stylized depictions of animals and spirits that allow the wearer to embody the spiritual power of those beings. Even when creating tools or kitchen items, people seem unable to resist decorating or shaping them for beauty. Ancient hunters carved the ivory handles of their knives. Ming dynasty ceramists embellished plates with graceful dragons. Modern Pueblo Indians incorporate traditional motifs into their carved and painted pots. The Western fine arts tradition values beauty and message. Once heavily influenced by Christianity and classical mythology, painting and sculpture have more recently moved toward personal expression and abstraction.
Humans have probably been molding clay, one of the most widely available materials in the world, since the earliest times. The era of ceramics began, however, only after the discovery that very high heat renders clay hard enough to be impervious to water. As societies grew more complex and settled, the need for ways to store water, food, and other commodities increased. In Japan, the Jomon people were making ceramics as early as 11,000 B.C. By about the seventh millennium B.C., kilns were in use in the Middle East and China, achieving temperatures above 1832°F. Mesopotamians were the first to develop true glazes, though the art of glazing arguably reached its highest expression in the celadon and three-color glazes of medieval China. In the New World, although potters never reached the heights of technology seen elsewhere, Moche, Maya, Aztec, and Puebloan artists created a diversity of expressive figurines and glazed vessels.
When Spanish nobleman Marcelino Sanz de Sautuola described the paintings he discovered in a cave at Altamira, contemporaries declared the whole thing a modern fraud. Subsequent finds confirmed the validity of his claims and proved that Paleolithic people were skilled artists. Early artists used stone tools to engrave shapes into walls. They used pigments from hematite, manganese dioxide, and evergreens to achieve red, yellow, brown, and black colors. Brushes were made from feathers, leaves, and animal hair. Artists also used blowpipes to spray paint around hands and stencils.

The Gurukul was India’s first educational system. It was a residential schooling system that began around 5000 BC, in which the shishya (student) and guru (teacher) lived in the guru’s ashram (residence) or in close vicinity. This allowed an emotional bond to develop before knowledge was transmitted. The ancient Sanskrit language was used as the medium of communication.
The foundation of learning was not just reading books and memorising facts, but a child’s well-rounded, holistic development. Their mental, cognitive, physical, and spiritual well-being were all considered. Religion, holy scriptures, medicine, philosophy, warfare, statecraft, astrology, and other topics were covered.
The focus was on instilling human values in students, such as self-reliance, appropriate behaviour, empathy, creativity, and strong moral and ethical principles. The goal was for knowledge to be applied in the future to develop solutions to real-world challenges.
The Gurukul students’ six educational goals are as follows:
The acquisition of highest knowledge: The Gurukul education system’s ultimate goal was to understand Brahma (God) and the universe beyond sensual pleasures in order to achieve immortality.
Character development: The student developed will-power, which is a necessity for excellent character, as a result of their study of the Vedas (old scriptures), allowing them to develop a more positive attitude and outlook on life.
Development in all areas: Learning to withdraw the senses and practise introversion was thought to be the optimal approach to whole living. While completing various tasks at the Gurukul, pupils became aware of the inner workings of the mind, as well as their responses and reactions.
Social virtues: The learner was motivated to only tell the truth and avoid deception and lying by training his body, mind, and heart. This was regarded as the pinnacle of human morality. They were also encouraged to believe in charitable giving, which made them more socially responsible.
Spiritual development: Ancient literature, especially the Yagyas (rituals), recommends introversion as the best approach for spiritual development. As a result, the learner spent time in reflection and isolation from the outside world in order to gain self-knowledge and self-realisation by looking fully within himself.
Students presented food to a pedestrian or a guest once a year as part of their cultural education. This act was regarded as a sacrifice comparable to one’s social and religious obligations to others.
Under India’s Right to Education Act, 2009, every child between the ages of six and fourteen is entitled to free and compulsory education; the 2020 National Education Policy proposes extending this to ages three to eighteen.
According to India’s education statistics, over 26% of the population (1.39 billion) is between the ages of 0 and 14, which presents a significant opportunity for the primary education sector.
Furthermore, approximately 500 million people, or 18% of the population, are between the ages of 15 and 24, offering prospects for expansion in India’s secondary and higher education institutions.
According to the Indian education data, the literacy rate for adults (15+ years) in India is 69.3%, with male literacy at 78.8% and female literacy at 59.3%.
Kerala has the highest literacy rate in India, with 96.2 percent as of 2018.
The University of Delhi is the most well-known Indian higher education institution, followed by the Indian Institute of Technology Bombay.
In the 2019 English Proficiency Index, India was ranked 34 out of 100 countries, allowing for easy distribution of educational materials that meet universal standards.
Goals for India’s educational future
India joined the United Nations’ E9 programme in April 2021, which aims to build a digital learning and skills initiative for marginalised children and youth, particularly girls.
The Indian government allotted a budget of US$7.56 billion towards school education and literacy in the Union Budget 2021-22.
India’s higher education system is expected to feature more than 20 universities among the top 200 universities in the world by 2030. With an annual research and development (R&D) budget of US$140 billion, it is expected to be among the top five countries in the world in terms of research production.
It is obvious that modern Indian education differs from that of the “Gurukula.” The curriculum is generally taught in English or Hindi, and computer technology and skills have been integrated into learning systems. The focus is more on competitive examinations and grades than moral, ethical, and spiritual education.
In the 1830s, Lord Thomas Babington Macaulay introduced the modern school system to India for the first time. Metaphysics and philosophy were deemed unnecessary in favour of “modern” subjects like science and mathematics.
Until July 2020, India’s education system was based on the 10+2 system, which awarded a Secondary School Certificate (SSC) after finishing class 10th and a Higher Secondary Certificate (HSC) after finishing class 12th.
This has been replaced by the 5+3+3+4 system as a result of the new National Education Policy (NEP). The phases have been divided to correspond to the stages of cognitive development that a child naturally goes through.
India’s obligatory education system is divided into four levels.
1. Establishing a foundation
According to the NEP, the five-year foundational stage of education consists of three years of preschool followed by two years of primary school. This stage will include the development of linguistic abilities as well as age-appropriate play or activity-based strategies.
We have a course called English in Early Childhood: Learning Language Through Play for people working in early education that can help you understand the importance of play in language development and how to use play to teach language skills to children in a fun way. With our free online course, you can also learn how to Prevent and Manage Infections in Childcare and Pre-School.
2. Stage of preparation
This three-year stage will continue to emphasise verbal development while also emphasising numeracy abilities. Classroom interactions will also be activity-based, with a strong emphasis on the aspect of discovery.
3. The middle stage
The three-year focus moves to critical learning objectives, such as experiential learning in the sciences, mathematics, arts, social sciences, and humanities, for classes six through eight.
4. The secondary stage
Students in grades 9 and 10, as well as grades 11 and 12, have a range of subject combinations to pick from and study, depending on their talents and interests.
Critical thinking, an open mind, and flexibility in the cognitive process are all encouraged at this level. Our course Volunteering in the Classroom: Bringing STEM Industry into Schools will boost your students’ thinking abilities while also encouraging their interest in the subject of STEM, which has a large skills deficit and hence has a great employment potential.
Undergraduate study in India
Students can choose to study at the undergraduate level from age 18 onwards. The majority of students attend a free public college or university, while others choose a private institution for their education. Indian college and university degrees in agriculture, engineering, pharmaceutics and technology usually take four years to complete; law, medicine and architecture can take up to five years.
Post-graduate study in India
Known as master’s courses or doctorate degrees, they can take from two up to three years to complete, respectively. Post-graduate education in India is largely provided by universities, followed by colleges and the majority of students are women. Post-graduate study allows students to specialise in a chosen field and conduct large amounts of research.
Adult education in India
Adult education aims to improve literacy and move illiterate adults over the age of 21 along the path to knowledge. The National Literacy Mission Authority (NLMA) in India is in charge of supporting and promoting adult literacy programmes.
Our course Online Teaching: Creating Courses for Adult Learners offers everything you need to educate adults online if you’re an adult education provider or thinking about becoming one.
Distance education in India
The School of Correspondence Courses and Continuing Education at Delhi University was the first to implement distance learning in India in 1962. The goal was to allow people who had the desire and aptitude to learn more and improve their professional skills to do so.
Significant gains in online education in India have been made and continue to be made as technology advances. Due to rising consumer demand and the pandemic’s effects, Indian higher education institutions are focusing on developing online programmes. By 2026, India’s online education market is expected to be worth $11.6 billion.
Homeschooling and blended learning in India
While homeschooling is not common in India, nor is it widely acknowledged, distance learning is becoming the new standard as a result of the pandemic. As a result, many children will learn at home while also attending school, a practice known as blended learning.
Our course Blended Learning Essentials for Vocational Education and Training provides a complete introduction to blended learning for teachers and trainers.
The Union Cabinet authorised a new National Education Policy (NEP) in July 2020, which will be fully implemented by 2040. They also changed the Ministry of Human Resource Development (HRD) to the Ministry of Education, which will serve as the sole regulator for all Indian schools and higher education institutions.
The NEP was initially drafted in 1964 by a 17-member Education Committee and ratified by Parliament in 1968. Its objective is to provide the framework and lead the development of education in India. It has been updated three times since then, the most recent being under Narendra Modi’s Prime Ministership.
The 2020 NEP’s five major changes in school and higher education
1. School will begin at age three: The Right to Education Act (RTE) will now cover free and compulsory schooling from age three up to 18 years, instead of six to 14 years. This brings early childhood education of ages three to five, for the first time, under the scope of formal schooling.
2. Students will be taught in their mother tongue: Although not compulsory, the NEP suggests students until class five should be taught in their mother tongue or regional language as a way to help children learn and grasp non-trivial concepts quicker.
3. One umbrella body for the entire higher education system: Under the Higher Education Commission of India (HECI), public and private higher education institutions will be governed by the same set of norms for regulation, accreditation and academic standards.
4. Higher education becomes multidisciplinary: By 2040, all universities and colleges are expected to be multidisciplinary, according to the policy. Students will be able to create their own subject combinations based on their skill set and areas of interest.
5. There will be a variety of exit alternatives for undergraduate degrees: Colleges and universities in India are now permitted to offer a certificate after one year of study in a discipline or a diploma after two years of study under the new regulation. After completing a three-year programme, a bachelor’s degree is conferred.
Because of the proactive nature of the NEP, India’s education system is in sync with the global reforms in education brought about by Covid-19. We have various teaching tools accessible to help you create a better influence on your students’ lives and your teaching abilities, as blended learning appears to be the future of education in India.
We hope you’ve gotten a better understanding of the facts that make up India’s education system, whether it’s merely to broaden your horizons or to take advantage of the rapidly expanding Indian education sector.
Internet fraud is a sort of deception that involves the use of the internet. It is not a single fraud; rather, it is a collection of frauds. Internet fraudsters are omnipresent, and they are always coming up with new ways to defraud people and drain their bank accounts. We’ll talk about the many types of internet scams in this blog.

1. PHISHING OR AN EMAIL PHISHING SCAM
Fraudsters utilise this tactic to steal your personal information. In this scam, fraudsters send you emails impersonating a legitimate or well-known organisation. The primary goal of the emails is to steal your financial information. A link or file is generally included in these emails. If you click on those links, you will be directed to a phoney website that requests critical information such as your credit card number, UPI code, and other bank account details. Furthermore, clicking on such links can infect your machine with malware.
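One reason phishing links deceive is that the visible link text can differ from where the URL actually points. A minimal Python sketch of checking a link’s real hostname (the domain names are made up for illustration):

```python
from urllib.parse import urlparse

def is_same_site(url: str, legit_domain: str) -> bool:
    """True only if the link's real hostname is the legitimate domain
    or one of its subdomains."""
    host = urlparse(url).hostname or ""
    return host == legit_domain or host.endswith("." + legit_domain)

# The link text may say "yourbank.com", but the URL's real hostname decides:
print(is_same_site("https://login.yourbank.com/verify", "yourbank.com"))        # True
print(is_same_site("http://yourbank.com.attacker.net/verify", "yourbank.com"))  # False
```

Note how the second URL begins with "yourbank.com" yet actually belongs to "attacker.net"; this prefix trick is a common phishing pattern. A simple check like this is only a heuristic and not a substitute for caution.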
2. ONLINE SHOPPING SCAMS
It is one of the most significant online scams of recent years. Fraudsters set up bogus online shopping portals to defraud unsuspecting people of their hard-earned money. They display appealing products at low prices on their websites. However, after you pay for the order, either a counterfeit product is delivered or the merchandise is not sent at all. These websites have no return or refund procedures, and there is no customer care personnel to contact.
3. THEFT OF PERSONAL INFORMATION
Identity theft occurs when criminals steal your personal information over the internet and use it to apply for a personal loan, a two-wheeler loan, or a bank credit card. If a loan is taken out in your name, you are held responsible for paying it back, and the bank will send you repayment notices. If you do not repay the loan, your credit score will suffer and you will be labelled a loan defaulter.
Additionally, your stolen information might be utilized to construct phony social network accounts.
4. SCAMS INVOLVING WORK FROM HOME
The work-from-home scam is one of the most common types of online fraud. Fraudsters take advantage of those looking for work-from-home opportunities by suggesting that they can earn a lot of money by working from home for a few hours. To register for the scheme, job seekers are required to deposit a set amount of money for a job kit said to be needed for the work. Once the money is deposited, the employers disappear without a trace.
5. LOTTERY SWINDLE
Lottery fraud is one of India’s top three internet scams. It occurs when con artists phone you or send you emails and texts claiming you have won a lottery worth millions of rupees. You will be required to deposit money online in the name of tax in order to receive the lottery money. You may also be directed to phoney websites and prompted to pay there; when you make a payment on those websites, all of your card information is stolen.
6. MATRIMONIAL DECEPTION
People use online matrimony sites to find their life partners in our fast-paced world. However, the sad reality is that many people lose lakhs of rupees while searching for their soulmates on matrimony websites. Innocent people are duped by fraudsters who create phoney profiles. In addition, various gangs have been formed to carry out this scam. First, the perpetrators persuade victims to trust them. Once the trust has been established, money is taken from the victims.
7. TAX REFUND FRAUD
This type of fraud usually occurs during tax season, when taxpayers are expecting a refund. Taxpayers receive phoney refund SMS messages and emails from fraudsters pretending to be from the Income Tax Department. These messages are mostly sent with the goal of gathering personal information such as I-T Department internet login credentials, bank account information, and so on. You will be asked to give sensitive bank information so that the refund money can be credited to your bank account.
8. FRAUDULENT USE OF CREDIT CARD REWARD POINTS
Credit card firms offer reward points or loyalty points to encourage people to use their cards. Frauds involving credit card reward points have also been reported. Credit cardholders are contacted by fraudsters pretending to be from their credit card provider and offering to assist them in redeeming their credit card reward points. They generate a sense of urgency among cardholders by emphasising that the deal will expire soon. Cardholders will be required to enter their card details as well as an OTP in order to redeem their reward points. Fraudsters use these details to carry out fraudulent transactions.
9. OLX FRAUD
OLX fraud has become all too widespread, and many people have lost money while buying and selling items on the platform. A common occurrence on OLX is fraudsters posing as Army personnel and placing ads on the platform. To gain people’s trust, fraudsters use stolen identification cards of army personnel. They take money from the buyer for the advertised product but never deliver it. Fraudsters exploit the goodwill associated with the armed forces to defraud people of their hard-earned money.
10. SCAMS ON SOCIAL MEDIA
As the number of people utilizing social media grows, so does the number of social media scams. Cyberbullying, the use of social networking sites to bully people, is one of the most common forms of social media abuse, and many youngsters have fallen victim to it. There are also many other social media scams, such as Facebook friend fraud.

The selling of products or services to other businesses and organizations is known as business-to-business marketing. It differs from B2C marketing, which is focused on customers, in various ways.
In general, B2B marketing content is more informative and simpler than B2C marketing content. This is because, in contrast to consumer purchases, company purchases are driven by bottom-line revenue impact. Return on investment (ROI) is rarely a financial factor for the average person, but it is a top priority for corporate decision makers.
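ROI, the metric mentioned above, is simply the gain relative to what was spent. A small sketch with hypothetical figures (the campaign numbers are invented for illustration):

```python
def roi(gain: float, cost: float) -> float:
    """Return on investment as a fraction: (gain - cost) / cost."""
    return (gain - cost) / cost

# Hypothetical campaign: 10,000 spent, 14,000 in attributable revenue
print(f"{roi(14_000, 10_000):.0%}")  # 40%
```

This bottom-line framing is why B2B content tends to lead with measurable business impact rather than emotional appeal.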
B2B marketing applies to any business that sells to other businesses. B2B can take various forms, including subscriptions to software-as-a-service (SaaS), security solutions, tools, accessories, and office supplies, to mention a few. Many businesses fall under both the B2B and B2C categories.
Any individual who has power or influence over purchasing decisions is a target of B2B marketing initiatives. This can include a wide range of titles and functions, from entry-level end users to the C-suite.
There is a lot of competition for clients and their attention. Building a successful B2B strategy needs careful planning, implementation, and administration. Here’s a high-level look at how B2B organisations differentiate themselves in a crowded market:
Step 1: Create a Big Picture Vision
If you fail to plan, you are planning to fail. This axiom holds as true as ever. Define clear and measurable business objectives before you start cranking out adverts and content. Then you’ll want to create or adopt a framework for achieving them through your B2B marketing strategy.
Step 2: Establish your target market and buyer personas
This is especially important for B2B companies. B2B items and services are typically marketed to a specific set of consumers with specific difficulties and demands, whereas B2C goods are often promoted to a larger and more general audience. The more precisely you can define this audience, the better you’ll be able to communicate with them directly. It’s a good idea to make a dossier for your target buyer persona. To qualify leads, conduct demographic research, interview industry experts, and study your best customers to develop a list of criteria that you can compare against prospects.
Step 3: Determine B2B Marketing Channels and Tactics
After you’ve gathered good information about your target audience, you’ll need to figure out how and where you’ll reach them. This step should be guided by the knowledge you gained in the previous one. You’ll want to ask yourself where your ideal consumers and prospects spend their time, which channels they trust, and what kind of content they engage with.
Step 4: Develop Assets and Launch Campaigns
Now that you have a strategy in place, it’s time to put it into action. Make sure you’re following best practices for each channel you’re using in your approach. A unique strategy, relevant information, sophisticated targeting, and powerful calls to action are all essential parts in successful campaigns.
Step 5: Evaluate and Improve
This is a continuous procedure that keeps you on the right track. Simply put, you want to figure out why your high-performing content succeeds and your low-performing content fails. If you understand this, you’ll be able to spend your time and money more wisely. The more diligent you are about consulting analytics and applying what you’ve learned, the more likely you’ll be to keep improving and exceeding your objectives. Even with a solid research basis, creating content and campaigns entails a lot of guesswork until you have solid engagement and conversion statistics to work with.
Here are some of the most frequent B2B marketing methods and content formats to think about incorporating into your plan:
Blogs: Blogs are a must-have for practically any content marketing team. Regularly updated blogs increase your site’s organic visibility and boost inbound visitors. Your blog may accommodate a wide range of material types and formats.
Search: SEO best practices change as frequently as Google’s algorithm (which is a lot), making this a difficult space to navigate, but any B2B marketing strategy must account for it. In recent months, the emphasis has shifted away from keywords and metadata and toward searcher intent signals.
Social Media: Both organic and sponsored social media should be included in the mix. You can reach out to prospects on social media and engage them where they are. B2B buyers are increasingly turning to these platforms to research potential vendors before making a purchase.
Whitepapers, eBooks, and infographics: These downloadable documents can be gated (meaning a user must supply contact information or perform another action to gain access) or ungated (freely accessible with no action required). They are frequently used to generate B2B leads.
Email: Email will not go away anytime soon, even though its usefulness is fading in the age of spam filters and inbox shock.
Video: This content type can be used in several of the preceding categories (blogs, social media, and emails), but it’s worth mentioning because it’s at the heart of many effective B2B initiatives.
Livestream events and webinars: On average, LinkedIn Live videos receive 7x more reactions and 24x more comments than native video produced by the same broadcasters. LinkedIn Live is useful for more than just promoting an event. Use this feature to demonstrate expertise, showcase innovation, or give LinkedIn members a behind-the-scenes look at your company’s culture.
Case studies and customer testimonials: Case studies and customer testimonials are essential for B2B marketing strategists to establish credibility. Customer testimonials and case studies aren’t the most imaginative endeavours, but they’re essential nonetheless.
Podcasts: Podcasting is expected to grow in popularity even more than it currently has. Do you have a podcast aimed towards professionals? Are you considering starting one? Increase your podcast’s listenership by promoting it on LinkedIn.
How can you set yourself up for success in B2B marketing? Here are a few tried-and-true pillars to help your team stand out and make an impression.
Be Human
Yes, you’re attempting to gain a consumer, but you’re not marketing to a building or an intangible thing. You’re attempting to communicate with genuine employees, who, like any other human being, are motivated by emotional and cognitive factors.
Don’t limit your research to the firms and accounts you’re interested in. Learn about the people who work there, and tailor your marketing to their needs. Although business decisions are more sensible and logical, that doesn’t mean your content and tone should be robotic.
Target with Both Precision and Volume in Mind
Multiple stakeholders affect the majority of B2B purchasing decisions. When it comes to targeting, one of the most typical blunders is attempting to pinpoint the decision maker. However, in almost all cases, that one decision maker does not exist. As a result, it’s critical to target all stakeholders who may have an impact on the purchasing decision.
B2B buying cycles are complicated, and stakeholders’ professions and roles are continuously changing. This is only one of many reasons why brand familiarity is so important. The following tools can assist B2B marketers in reaching out to decision-makers who can both influence and authorise purchases. They let you get as specific as you want, and you can use sophisticated automation to extend your target group as necessary.
Keep Context in Mind
Today, personalization and relevance are required to gain attention. Yes, you want to speak your consumers’ language, but you also want to present content and advertising that are thematically appropriate for where they’re being viewed. Shorter videos with rapid hooks, for example, perform better on social media feeds, whereas a longer style is most likely better suited for YouTube. Catching someone looking through LinkedIn requires a different text angle than catching someone scrolling through other social media platforms. Put yourself in the shoes of the end user. When they’re watching your content, try to comprehend their current position, including their “surroundings,” and fit your message with their attitude.
LinkedIn is the most-used social media network for B2B marketers (at 96 percent), according to the CMI and MarketingProfs report B2B Content Marketing 2021: Benchmarks, Budgets, and Trends.
LinkedIn was also the leading paid social media site for B2B marketing. The most recent survey did not ask respondents which paid platform had the best results, although respondents in the prior survey said LinkedIn.
At a basic level, we strongly advocate that every B2B company create an optimised LinkedIn Page, which you can do for free on LinkedIn, since this will serve as your brand’s hub on the platform and a popular location for buyer research. Posting updates on a regular basis will keep you top of mind with your target audience and help you gain followers. There are a variety of LinkedIn marketing solutions and services you can use to target and engage the ideal users for maximum business impact and B2B marketing ROI.
Native Ads
Native ads are referred to as Sponsored Content on LinkedIn. These ads appear alongside the user-generated content that LinkedIn members come to see. For thought leadership, brand recognition, and driving strategic traffic, this is a great tool.
Lead Generation
Many B2B marketers are judged on their lead generation abilities. Because they pre-populate the viewing member’s LinkedIn profile data and don’t require the user to leave the site, Lead Gen Forms are particularly useful for this purpose. It’s a win-win situation for both marketers and members. When it comes to accessing deals and information, members get a consistent experience. Lead data is of excellent quality for B2B marketers.
Retargeting
The LinkedIn Insight Tag allows you to track visitors who come to your website and then market to them on LinkedIn afterward. These people are more likely to be interested in your business and goods, increasing your conversion chances.
Message Ads
LinkedIn Message Ads are becoming more advantageous as reaching professional inboxes (and sometimes even finding email addresses) becomes increasingly difficult. You can use this feature to send personalised direct messages to LinkedIn members, even if you aren’t linked yet.
Dynamic Ads
These ads are tailored to the individual who is viewing them. To stand out and grab attention, they instantly populate with profile photographs and essential details.
Here are some essential factors to bear in mind as we summarise the most important conclusions from our investigation of modern B2B marketing:

Every business, whether for profit or not, public or private, needs well-trained and experienced staff to carry out the operations necessary to meet the organization’s objectives.
Employees must be trained to improve their skill levels as well as their versatility and adaptability.
Inadequate work performance, productivity declines, changes resulting from job restructuring, or technological breakthroughs all necessitate some form of training and development.
Training is not a one-size-fits-all event; rather, it is a step-by-step procedure that is only complete once all of the required tasks have been carried out successfully.
(i) Organisational Analysis:
Under organisational analysis, the following elements are studied:
(a) Analysis of Objectives and Strategies:
The entire organisation is examined in terms of its goals, resources, resource allocation and utilisation, growth potential, and the environment in this analysis. The goal of this analysis is to establish where in the organisation training should be prioritised.
(b) Resource Utilisation Analysis:
The major goal of this investigation is to see how organisational resources are used. This analysis looks at the contributions of several departments by generating efficiency indices for each unit, which aid in estimating the human resource contribution.
(c) Environmental Analysis:
This analysis looks at the organization’s economic, social, political, and technological surroundings. Its major goal is to identify the organization’s controllable and uncontrollable components.
(d) Organisational Climate Analysis:
The attitude of management and employees is examined in this analysis, as the support of management and their attitude toward employees is required for planning and implementing the training programme.
(ii) Role or Task Analysis:
It is a thorough assessment of all facets of the profession. It investigates the numerous operations as well as the conditions in which they are to be carried out.
The following procedure is involved in task analysis:
(a) The duties and responsibilities of the task in question are listed using the job description as a guide.
(b) Creating a list of the job’s performance standards.
(c) Making a comparison between the actual and expected results.
(d) Identifying the components of the task that are causing problems in the effective performance of the job if there is a gap between the two.
(e) Identifying the training requirements to address the issues.
(iii) Manpower Analysis:
The fundamental goal of this examination is to assess the individual’s abilities, skills, and growth and development. Manpower analysis helps determine an individual’s strengths and shortcomings. It also helps in deciding whether or not he requires training and, if so, what kind of instruction he needs.
The various sources of such information are as follows:
(a) Employee observation in the workplace.
(b) Conducting an interview with the employee’s boss and coworkers.
(c) The employee’s personal files.
(d) Tests and production records. These sources supply information on the skills and attitude the employee currently has, as well as those he should have.
2. Preparing the Training Programme:
After the training needs have been determined, the second step in the training process is to construct a training programme that meets those needs.
The training programme should take into account the following considerations:
(i) New and experienced trainees
(ii) The kind of training materials that are needed
(iii) A person who will provide training as a resource
(iv) A training programme that is either on-the-job or off-the-job
(v) The length of the training programme
(vi) The training method.
3. Preparing the Learners:
The trainees who will participate in the training programme must be well-prepared for it. They will not be interested in learning the main components of the training programme if they are not prepared. As a result, learners should be adequately prepared so that they may get the most out of the training session.
The following steps are required to prepare learners for the training programme:
(i) Making the students feel at ease, especially if they are beginners, so that they are not frightened on the job.
(ii) Ensuring that the learners comprehend the relevance of the job and how it relates to the overall process.
(iii) Assisting learners in comprehending the training’s demands and objectives in respect to their jobs.
(iv) Creating interest in the training programme among learners to motivate them to learn.
(v) If on-the-job training is used, placing trainees as close to their actual jobs as practicable.
(vi) Getting the students acquainted with the equipment, materials, and tools, among other things.
4. Implementing Training Programme:
This is the training program’s action phase. The trainer teaches and illustrates the new methods and knowledge to the learners during this phase. At this stage, the students are exposed to a variety of training exercises. To make the training a successful learning experience for the employees, the main topics are emphasised and one item is explained at a time.
To keep the learners’ attention in the training programme, audio-visual aids are employed to exhibit and illustrate, and the trainer encourages them to ask questions.
5. Performance Try Out:
The learner is asked to repeat the job multiple times, slowly, at this point. The trainees’ errors are addressed, and the technical and tough portions are explained again if necessary.
6. Evaluation of the Training Programme:
Training evaluation is an attempt to acquire information (feedback) on the impacts of a training programme and determine the training’s worth in light of that information. While organisations may spend a lot of money and time developing and implementing training programmes, the evaluation aspect is sometimes overlooked. This could be due to the assumption that determining the efficiency of training is difficult, if not impossible.
Only a comprehensive assessment of the real change in behaviour and performance on the job, over a long period of time, can determine the true success of training and development activities. As a result, the fundamental goal of training is to impart new knowledge, skills, and change in attitude and behaviour.
If training does not result in changes in any of these areas, it is completely useless. As a result, training is solely evaluated in terms of changes in skills, knowledge, attitude, and behaviour.
“Meditation can wipe away the day’s stress, bringing with it inner peace. See how you can easily learn to practice meditation whenever you need it most.”
Mayo Clinic Staff
If stress makes you feel uncomfortable, tense, or worried, try meditation. Even a few minutes of meditation might help you regain your sense of calm and inner serenity.
Meditation is something that everybody can do. It’s simple, inexpensive, and doesn’t require any special equipment.
And you can meditate anywhere you are: on a walk, on the bus, in line at the doctor’s office, or even in the middle of a tense work meeting.

For thousands of years, people have been meditating. Meditation was created to aid in the comprehension of life’s sacred and mystical powers. Meditation is widely utilised these days for relaxation and stress reduction.
Meditation is a sort of supplementary treatment for the mind and body. Meditation can help you achieve a deep state of relaxation as well as a calm mind.
During meditation, you concentrate your attention and clear your mind of the muddled thoughts that may be bothering you and producing stress. Physical and emotional well-being may be improved as a result of this process.
Meditation can help you achieve a sense of quiet, peace, and balance, which can improve your emotional well-being as well as your general health.
And the advantages don’t stop when you stop meditating. Meditation can help you stay calmer throughout the day and may even aid in the management of symptoms associated with some medical problems.
Meditation and Emotional Well-Being
When you meditate, you can rid your mind of the information overload that accumulates throughout the day and contributes to stress.
The following are some of the emotional advantages of meditation:
Meditation and Illness
If you have a medical problem, especially one that is exacerbated by stress, meditation may be beneficial.
Although a growing body of scientific evidence supports the health advantages of meditation, some researchers say it is still too early to draw conclusions about its potential benefits.
In light of this, some research suggests that meditation may aid in the management of symptoms associated with conditions such as:
If you have any of these conditions or other health issues, talk to your health care practitioner about the benefits and drawbacks of meditation. Meditation has been shown to exacerbate symptoms of mental and physical illnesses in certain people. Traditional medical care is not replaced by meditation. However, it can be a good complement to your current treatment.
Meditation is a broad term that encompasses a variety of approaches to achieving a calm state of mind. There is a wide range of relaxation and meditation techniques, and all of them strive for the same thing: inner serenity.
Meditation can be done in a variety of ways, including:
Without the need of attention or effort, this type of meditation may help your body to settle into a state of profound rest and relaxation and your mind to achieve a state of inner peace.
Distinct styles of meditation may have different qualities to assist you in your meditation. These may differ depending on who you follow for advice or who is giving a lesson. The following are some of the most common elements of meditation:
Don’t let the prospect of meditating “properly” add to your anxiety. You can go to dedicated meditation facilities or group programmes guided by certified instructors if you want to. However, you may easily practise meditation on your own.
And you may make meditation as formal or informal as you want, depending on your preferences and circumstances. Some people make it a habit to meditate every day. They could, for example, meditate for an hour at the start and finish of each day. However, all you truly need is a few minutes of great meditation time.
Here are some methods for practising meditation on your own whenever you want:
Concentrate solely on your breathing. As you inhale and exhale through your nose, focus on feeling and listening. Slowly and deeply inhale. When your mind wanders, gently bring it back to your breathing.

With each transaction, successful businesses produce value for their customers in the form of satisfaction, as well as for themselves and their shareholders in the form of profit. Companies that provide more value with each sale have a better chance of profiting than those that produce less value. It’s vital to understand your company’s value chain in order to assess how much value it generates.
Here’s an overview of what a value chain is, why it’s important to understand it, and how you can use it to help your business create and keep more value from its sales.
The phrase “value chain” refers to all of the commercial activities and procedures that go into making a product or providing a service. A value chain can span various stages of a product’s lifecycle, from research and development through sales and everything in between. Harvard Business School Professor Michael Porter developed the notion in his book Competitive Advantage: Creating and Sustaining Superior Performance.
Taking stock of the processes that make up your company’s value chain will give you a better understanding of what goes into each transaction. By maximising the value created at each point in the chain, your organisation can be better positioned to share more value with consumers while capturing a larger portion of it. Similarly, understanding how your company creates value can help you better appreciate its competitive edge.
All of the activities that make up a firm’s value chain, according to Porter’s concept, can be divided into two groups that contribute to its margin: primary activities and support activities.
Primary activities are those that directly contribute to the development of a product or the delivery of a service, such as:
Secondary activities are divided into the following categories to help primary operations become more efficient, hence creating a competitive advantage:
Value chain analysis is a method of assessing each activity in a company’s value chain to determine where improvements might be made.
A value chain analysis forces you to analyse how each step contributes to or detracts from the value of your end product or service. As a result, you may be able to gain a competitive edge, such as:
In most cases, improving one of the four secondary activities will help at least one of the primary activities.
3. Identify Opportunities for Competitive Advantage
You may assess your value chain through the lens of whatever competitive advantage you’re seeking to acquire once you’ve compiled it and understand the cost and value associated with each stage.
If your primary goal is to lower your company’s costs, for example, you should assess each component of your value chain through the lens of cost reduction. Which steps could be made more productive? Are there any that don’t add much value and could be outsourced or deleted entirely to save money? Similarly, if product differentiation is your primary goal, which portions of your value chain provide the best potential to achieve that goal? Would the added value justify the expenditure of more resources?
You can identify multiple opportunities for your company through value chain analysis, which can be tough to prioritise. It’s usually better to start with the changes that require the least amount of effort yet provide the highest return on investment.
Security services guarantee the protection of agents against attacks. During an agent’s transportation, its code is protected like an ordinary file. At the host site, however, the agent is open to modification, and very specific methods must be applied for its protection.

A processing or communication service that is provided by a system to give a specific kind of protection to resources, where said resources may reside with said system or reside with other systems, for example, an authentication service or a PKI-based document attribution and authentication service. A security service is a superset of AAA services. Security services typically implement portions of security policies and are implemented via security mechanisms.
Facility management services are designed and delivered according to the customer’s needs, be it housekeeping services, janitorial support, HVAC repairs, or pest control. We have shown significant growth over the past few years to become the 4th largest player in this space, with a nationwide presence like few others. Our clients are as diverse as our services, ranging from households to businesses and industrial establishments.
Cyber security services protect companies that provide services via the internet, for example by protecting company accounts, customer data, and infrastructure. Such services are based on the protection of computer data, networks, and companies’ identity management.
Cyber security services are a branch of technology that protects computer hardware, software, data, and networks from unauthorized cyber attacks by internal and external sources. The field is becoming increasingly important over time, as we face more and more cyber-attacks from a variety of sources. The primary requirement of a cyber security service is to keep systems and networks safe from external attacks.
Security Services:
A processing or communication service that enhances the security of the data processing systems and the information transfers of an organization. These services are intended to counter security attacks, and they make use of one or more security mechanisms to provide the service. Following are the five categories of these services:
Authentication: The assurance that the communicating entity is the one that it claims to be.
Data Confidentiality: Protects data from unauthorized disclosure.
Access Control: The prevention of unauthorized use of a resource (i.e., this service controls who can have access to a resource, under what conditions access can occur, and what those accessing the resource are allowed to do).
Data Integrity: The assurance that data received are exactly as sent by an authorized entity (i.e., contain no modification, insertion, deletion, or replay).
Non-repudiation: Protects against denial by one of the entities involved in a communication of having participated in all or part of the communication.
There are different types of security services that service providers give to the firms, organisations, or individuals.
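As a concrete illustration of two of the service categories above, data integrity and authentication, here is a minimal sketch of message authentication using an HMAC. The key and messages are made-up values for demonstration; a real service would manage keys far more carefully:

```python
import hashlib
import hmac

# Hypothetical shared key, for illustration only.
SECRET_KEY = b"shared-secret-key"

def sign(message: bytes) -> str:
    """Produce an HMAC-SHA256 tag proving origin and integrity."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Accept the message only if its tag matches (constant-time compare)."""
    return hmac.compare_digest(sign(message), tag)

msg = b"transfer 100 units to account 42"
tag = sign(msg)
print(verify(msg, tag))                    # True: message unmodified
print(verify(b"transfer 999 units", tag))  # False: tampering detected
```

Because the tag depends on both the message and a secret key, a recipient who verifies it gains assurance of both integrity (no modification) and authentication (the sender knew the key).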
The observable universe consists of up to two trillion galaxies, each made of billions and billions of stars. In the Milky Way galaxy alone, scientists estimate that there are some 40 billion Earth-like planets in the habitable zones of their stars. Looking at these numbers, there is a real possibility that alien civilizations exist. In a universe this big and old, civilizations may have started millions of years apart from each other and developed in different directions and at different speeds, so they may range from cavemen to the super-advanced. Humans started out with nothing, then went on to make tools, build houses, and so on. We know that humans are curious, competitive, greedy for resources, and expansionist. The more of these qualities our ancestors had, the more successful they were in the civilization-building process.

Other alien civilizations, if they exist, must have evolved in a similar way. Human progress can be measured quite precisely by how much energy we extract from our environment. As our energy consumption grew exponentially, so did the abilities of our civilization: between 1800 and 2015, the population increased sevenfold, while humanity consumed 25 times more energy. It is likely that this process will continue into the far future. Based on these facts, the scientist Nikolai Kardashev developed a method for categorizing civilizations, from cave dwellers to gods ruling over galaxies, called the Kardashev scale. It ranks civilizations by their energy use and puts them into four categories. A type 1 civilization is able to use the available energy of its home planet. A type 2 civilization is able to use the available energy of its star and planetary system. A type 3 civilization is able to use the available energy of its galaxy. A type 4 civilization is able to use the available energy of multiple galaxies.
A civilization that far ahead of us would be like a human metropolitan area compared to an ant colony: to ants, we are so complex and powerful that we might as well be gods. On the lower end of the scale are type 0 to type 1 civilizations, anything from hunter-gatherers to something we could achieve in the next few hundred years. These might actually be abundant in the Milky Way. If so, why are they not sending any radio signals into space? Even if they transmitted radio signals like we do, it might not be very helpful. In such a vast universe, our signals may extend over 200 light years, but this is only a tiny fraction of the Milky Way, and after a few light years our signals decay into noise, impossible to identify as coming from an intelligent species. Today humanity ranks at about level 0.75: we have created huge structures and changed the composition and temperature of the atmosphere. If progress continues, we will become a full type 1 civilization in the next few hundred years. The next step toward type 2 is to mine other planets and bodies.
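The fractional rating for humanity comes from Carl Sagan’s continuous version of the Kardashev scale, K = (log10(P) - 6) / 10, where P is a civilization’s power use in watts. A quick sketch (the power figures below are rough, assumed values, not measured data):

```python
import math

def kardashev_rating(power_watts: float) -> float:
    """Sagan's interpolation of the Kardashev scale: K = (log10(P) - 6) / 10."""
    return (math.log10(power_watts) - 6) / 10

# Humanity uses on the order of 2e13 W (an assumed round figure),
# which lands near the ~0.75 rating quoted above.
print(round(kardashev_rating(2e13), 2))  # 0.73
print(kardashev_rating(1e16))            # 1.0 -- roughly planetary scale
print(kardashev_rating(1e26))            # 2.0 -- roughly stellar scale
print(kardashev_rating(1e36))            # 3.0 -- roughly galactic scale
```

Each whole step on the scale therefore corresponds to a ten-billion-fold jump in energy use, which is why the gap between adjacent types is so vast.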

As a civilization expands and uses more and more material and space, at some point it may begin an enormous project: extracting the energy of its star by building a Dyson swarm. Once that is finished, energy becomes effectively unlimited, and the next frontier moves to other stars light years away. The closer a species gets to type 3, the more likely it is to discover new physics, perhaps understanding and controlling dark matter and dark energy, or even travelling faster than light. To such a civilization, humans are the ants trying to understand the galactic metropolitan area. A high type 2 civilization might already consider humanity too primitive; a type 3 civilization might consider us bacteria. But the scale doesn’t end there: some scientists suggest there might be type 4 and type 5 civilizations, whose influence stretches over galaxy clusters or superclusters. The Kardashev scale is just a thought experiment, but it still raises interesting possibilities. Who knows, there might even be a type omega civilization, able to manipulate the entire universe, which might be the actual creator of our universe.
The James Webb Space Telescope, or JWST, will replace the Hubble Space Telescope and help us see the universe as it was shortly after the big bang. It was named after the second administrator of NASA, James Webb, who headed the agency from 1961 to 1968. The new telescope was first planned for launch in 2007 but has since been delayed more than once; it is now scheduled for 18 December 2021. After 2030, Hubble will go to a well-deserved rest; since its launch in 1990, it has provided more than a million images of thousands of stars, nebulae, planets, and galaxies. Hubble has captured images of stars as they were about 380 million years after the big bang, which supposedly happened 13.7 billion years ago. Even though these objects may no longer exist, we still see their light. Now we expect James Webb to show us the universe as it was only 100 to 250 million years after its birth, which could transform our current understanding of the structure of the universe. The Spitzer and Hubble space telescopes have collected data on the gas shells of about a hundred planets; according to experts, James Webb is capable of exploring the atmospheres of more than 300 different exoplanets.

The working of James Webb space telescope
The James Webb is an orbiting infrared observatory that will investigate the thermal radiation of space objects. When heated, all solids and liquids emit energy in the infrared spectrum, and there is a relationship between wavelength and temperature: the higher the temperature, the shorter the peak wavelength and the higher the radiation intensity. James Webb’s sensitive equipment will be able to study even cold exoplanets with surface temperatures of up to 27° Celsius. An important quality of the new telescope is that it will revolve around the sun rather than the earth, unlike Hubble, which sits at an altitude of about 570 kilometers in low earth orbit. With James Webb orbiting the sun, the earth cannot interfere with its view; however, James Webb will move in sync with the earth to maintain strong communication. The distance from James Webb to the earth will be between about 374,000 and 1.5 million kilometers, in the direction opposite the sun, so its design must be extremely reliable.
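The temperature-wavelength relationship described above is captured by Wien’s displacement law, lambda_peak = b / T, with b approximately 2.898e-3 meter-kelvins. A small sketch showing why a roughly 27° C (300 K) body emits squarely in the infrared range an instrument like JWST observes:

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, in meter-kelvins

def peak_wavelength_um(temperature_k: float) -> float:
    """Blackbody peak emission wavelength in micrometers at T kelvin."""
    return WIEN_B / temperature_k * 1e6

# A 27 C (300 K) exoplanet peaks near 9.7 um, deep in the infrared,
# while the Sun (~5800 K surface) peaks near 0.5 um, in visible light.
print(round(peak_wavelength_um(300), 1))   # 9.7
print(round(peak_wavelength_um(5800), 2))  # 0.5
```

This is why an infrared telescope can see cool objects that are essentially invisible to an optical one: their thermal glow simply peaks at much longer wavelengths.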
The James Webb telescope weighs 6.2 tonnes. Its main mirror, with a diameter of 6.5 meters and a collecting area of 25 square meters, resembles a giant honeycomb consisting of 18 sections. Due to its impressive size, the main mirror has to be folded for launch; this giant mirror will capture light from the most distant galaxies and can create a clear picture while eliminating distortion. A special type of beryllium was used in the mirror, which retains its shape at low cryogenic temperatures. The front of the mirror is covered with a layer of 48.25 grams of gold, 100 nanometers thick; such a coating best reflects infrared radiation. A small secondary mirror sits opposite the main mirror; it receives light from the main mirror and directs it to instruments at the rear of the telescope. The sunshield, with a length of 20 meters and a width of 7 meters, is composed of very thin layers of Kapton polyimide film, which protects the mirror and instruments from sunlight and cools the telescope’s ultra-sensitive detectors to about -220° Celsius.

The NIRCam, or Near Infrared Camera, is the main set of eyes of the telescope; with NIRCam we expect to be able to view the oldest stars in the universe and the planets around them. The NIRSpec near-infrared spectrograph will collect information on both the physical and chemical properties of an object, and the MIRI mid-infrared instrument will allow us to see stars being born as well as many unknown objects of the Kuiper belt. The Near Infrared Imager and Slitless Spectrograph, or NIRISS, is aimed at finding exoplanets and the first light of distant objects. Finally, the FGS, or Fine Guidance Sensor, helps point the telescope accurately for higher-quality images: it updates its position in space sixteen times per second and controls the operation of the steering and main mirrors. The telescope is planned to launch on the European Ariane 5 launch vehicle from the Kourou spaceport in French Guiana. The device is designed for 5 to 10 years of operation, but it may serve longer. If everything goes well, $10 billion worth of construction and years of preparation will finally be put to work in orbit.
Treating illness by using tools to remove or manipulate parts of the human body is an old idea. Even minor operations carried high risks, but that doesn’t mean all early surgery failed. Indian doctors, centuries before the birth of Christ, successfully removed tumors and performed amputations and other operations. They developed dozens of metal tools, relied on alcohol to dull the patient’s pain, and controlled bleeding with hot oil and tar. The 20th century brought even more radical change through technology. Advances in fiber optic technology and the miniaturization of video equipment have revolutionized surgery. The laparoscope is the James Bond-like gadget of the surgeon’s repertoire of instruments: only a small incision is made through the patient’s abdominal wall, into which the surgeon puffs carbon dioxide to open up the passage.

Using a laparoscope for visual assessment, diagnosis, and even surgery causes less physiological damage, reduces patients’ pain, and speeds their recovery, leading to shorter hospital stays. In the early 1900s, Germany’s Georg Kelling developed a surgical technique in which he injected air into the abdominal cavity and inserted a cystoscope, a tube-like viewing scope, to assess the patient’s innards. In late 1901, he began experimenting and successfully peered into a dog’s abdominal cavity using the technique. Without cameras, laparoscopy’s use was limited to diagnostic procedures carried out by gynecologists and gastroenterologists. By the 1980s, improvements in miniature video devices and fiber optics inspired surgeons to embrace minimally invasive surgery. In 1996, the first live broadcast of a laparoscopy took place. A year later, Dr. J. Himpens used a computer-controlled robotic system to aid in laparoscopy. This type of surgery is now used for gallbladder removal as well as for the diagnosis and treatment of fertility disorders, cancer, and hernias.
Hypothermia, a drop in body temperature significantly below normal, can be life-threatening, as in the case of overexposure to severe wintry conditions. But in some cases, like that of Kevin Everett of the Buffalo Bills, hypothermia can be a lifesaver. Everett fell to the ground with a potentially crippling spinal cord injury during a 2007 football game. Doctors treating him on the field immediately injected his body with a cooling fluid. At the hospital, they inserted a cooling catheter to lower his body temperature by roughly five degrees, at the same time proceeding with surgery to fix his fractured spine. Despite fears that he would be paralyzed, Everett regained his ability to walk, and advocates of therapeutic hypothermia feel his lowered body temperature may have made the difference. Therapeutic hypothermia is still a controversial procedure. The side effects of excessive cooling include heart problems, blood clotting, and increased infection risk. On the other hand, supporters claim, it slows down cell damage, swelling, and other destructive processes well enough that it can mean successful surgery after a catastrophic injury. Surgical lasers can generate heat of up to 10,000°F on a pinhead-sized spot, sealing blood vessels and sterilizing tissue. Surgical robots and virtual computer technology are changing medical practice, and robotic surgical tools increase precision. In 1998, heart surgeons at Paris’s Broussais Hospital performed the first robotic surgery. New technology allows enhanced views and precise control of instruments.
“After a complex laparoscopic operation, the 65-year-old patient was home in time for dinner”. – Elisa Birnbaum, surgeon
Thomas Newcomen, a Devonshire blacksmith, developed the first successful steam engine in the world and used it to pump water from mines. His engine was a development of the thermic siphon built by Thomas Savery, whose surface condensation patents blocked his own designs. Newcomen’s engine allowed steam to condense inside a water-cooled cylinder, the vacuum produced by this condensation being used to draw down a tightly fitting piston that was connected by chains to one end of a huge, wooden, centrally pivoted beam. The other end of the beam was attached by chains to a pump at the bottom of the mine. The whole system was run safely at near atmospheric pressure, the weight of the atmosphere being used to depress the piston into the evacuated cylinder.

Newcomen’s first atmospheric steam engine worked at Conygree in the West Midlands of England. Many more were built in the next seventy years, the initial brass cylinders being replaced by larger cast-iron ones, some up to 6 feet (1.8 m) in diameter. The engine was relatively inefficient, and in areas where coal was not plentiful it was eventually replaced by double-acting engines designed by James Watt. These used both sides of the cylinder for power strokes and usually had separate condensers. James Watt was responsible for some of the most important advances in steam engine technology.
In 1765 Watt made the first working model of his most important contribution to the development of steam power; he patented it in 1769. His innovation was an engine in which steam condensed outside the main cylinder in a separate condenser, so the cylinder remained at working temperature at all times. Watt made several other technological improvements to increase the power and efficiency of his engines. For example, he realized that, within a closed cylinder, low-pressure steam could push the piston instead of atmospheric air. It took only a short mental leap for Watt to design a double-acting engine in which steam pushed the piston first one way, then the other, increasing efficiency still further.
Watt’s influence in the history of steam engine technology owes as much to his business partner, Matthew Boulton, as it does to his own ingenuity. The two men formed a partnership in 1775, and Boulton poured huge amounts of money into Watt’s innovations. From 1781, Boulton and Watt began making and selling steam engines that produced rotary motion. All the previous engines had been restricted to a vertical, pumping action. Rotary steam engines were soon the most common source of power for factories, becoming a major driving force behind Britain’s Industrial Revolution.

By the age of nineteen, Cornishman Richard Trevithick worked for the Cornish mining industry as a consultant engineer. The mine owners were attempting to skirt around the patents owned by James Watt. William Murdoch had developed a model steam carriage, starting in 1784, and demonstrated it to Trevithick in 1794. Trevithick thus knew that recent improvements in the manufacturing of boilers meant that they could now cope with much higher steam pressure than before. By using high pressure steam in his experimental engines, Trevithick was able to make them smaller, lighter, and more manageable.
Trevithick constructed high-pressure working models of both stationary and locomotive engines that were so successful that in 1799 he built a full-scale, high-pressure engine for hoisting ore. The used steam was vented out through a chimney into the atmosphere, bypassing Watt’s patents. Later, he built a full-size locomotive that he called the Puffing Devil. On December 24, 1801, this bizarre-looking machine successfully carried several passengers on a journey up Camborne Hill in Cornwall. Despite objections from Watt and others about the dangers of high-pressure steam, Trevithick’s work ushered in a new era of mechanical power and transport.
In the 1800s, scientists discovered the realm of light beyond what is visible. The 20th century saw dramatic improvements in observation technologies. Now we are probing distant planets, stars, galaxies, and black holes whose light takes years to reach us. So how do we do that? Light is the fastest thing we know of in the universe. It is so fast that we measure enormous distances by how long it takes light to travel them. In one year, light travels about 6 trillion miles; this distance is what we call one light-year. Apollo 11 took four days to reach the Moon, yet the Moon is only about one light-second from Earth. Meanwhile, the nearest star beyond our own Sun, Proxima Centauri, is 4.24 light-years away. Our Milky Way galaxy is on the order of 100,000 light-years across, and the nearest galaxy to our own, Andromeda, is about 2.5 million light-years away.
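All of these distances reduce to simple divide-by-the-speed-of-light arithmetic. A minimal sketch in Python (the Moon distance of about 384,400 km is an assumed approximate figure, not from the text):

```python
C = 299_792_458                       # speed of light, m/s
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def light_travel_time_s(distance_m):
    """Time in seconds for light to cover a distance given in meters."""
    return distance_m / C

# One light-year in meters, then in miles
light_year_m = C * SECONDS_PER_YEAR
print(f"one light-year ≈ {light_year_m / 1609.344:.2e} miles")  # about 6 trillion, as the text says

moon_distance_m = 384_400e3  # assumed average Earth-Moon distance
print(f"Moon: {light_travel_time_s(moon_distance_m):.2f} light-seconds away")  # ~1.28

proxima_m = 4.24 * light_year_m
print(f"Proxima Centauri: {light_travel_time_s(proxima_m) / SECONDS_PER_YEAR:.2f} years of light travel")
```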

The question is: how do we know the distances of these stars and galaxies? For objects that are very close by, we can use a concept called trigonometric parallax. Hold out your thumb, close your left eye, and then open your left eye and close your right eye. Your thumb will appear to move, while more distant objects remain in place. The same concept applies to measuring distant stars, but they are much farther away than the length of your arm, and Earth is not a large enough baseline; even if you had different telescopes across the equator, you would not see much of a shift in position. So we look at the change in a star's apparent location over six months: when we measure the relative positions of the stars in summer, and then again in winter, nearby stars seem to have moved against the background of the more distant stars and galaxies.
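In practice, astronomers turn the measured parallax angle into a distance with the relation d = 1/p, where d is in parsecs and p is in arcseconds. A minimal sketch, using Proxima Centauri's published parallax of roughly 0.768 arcseconds as the assumed input:

```python
def parallax_distance_ly(parallax_arcsec):
    """Distance from annual parallax: d = 1/p parsecs; 1 parsec ≈ 3.26 light-years."""
    PARSEC_IN_LY = 3.2616
    return (1.0 / parallax_arcsec) * PARSEC_IN_LY

# Proxima Centauri's measured parallax is about 0.768 arcseconds
print(f"{parallax_distance_ly(0.768):.2f} light-years")  # 4.25, matching the ~4.24 in the text
```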
But this method only works for objects less than a few thousand light-years away. For greater distances, we use a different method based on indicators called standard candles. Standard candles are objects whose intrinsic brightness, or luminosity, we know well. For example, if you know how bright a light bulb is, then even when you move away from it you can find its distance by comparing the amount of light you receive to its intrinsic brightness. In astronomy, one such standard candle is a special type of star called a Cepheid variable. These stars repeatedly contract and expand, and because of this their brightness varies. We can calculate the luminosity by measuring the period of this cycle, with more luminous stars changing more slowly. By comparing the light we receive to the intrinsic brightness, we can calculate the distance.
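The comparison of received light to intrinsic brightness is just the inverse-square law, F = L / (4πd²), rearranged for distance. A small sketch, sanity-checked against the Sun (the luminosity and flux values are assumed approximate figures, not from the text):

```python
import math

def distance_from_candle(luminosity_w, flux_w_per_m2):
    """Inverse-square law: F = L / (4*pi*d^2), so d = sqrt(L / (4*pi*F))."""
    return math.sqrt(luminosity_w / (4 * math.pi * flux_w_per_m2))

# Sanity check with the Sun: luminosity ~3.828e26 W, flux at Earth ~1361 W/m^2
d = distance_from_candle(3.828e26, 1361.0)
print(f"{d:.3e} m")  # ~1.5e11 m, about one astronomical unit
```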

But we can only observe individual stars up to about 40 million light-years away. So we have to use another type of standard candle, called a Type Ia supernova. Supernovae are giant stellar explosions, one of the ways that stars die. These explosions are so bright that they outshine the galaxies where they occur. We can use Type Ia supernovae as standard candles because intrinsically brighter ones fade more slowly than fainter ones. With an understanding of brightness and decline rate, we can use these supernovae to probe distances up to several billions of light-years away. But what is the importance of seeing distant objects? Well, the light emitted by the Sun takes eight minutes to reach us, which means that the light we see now is a picture of the Sun eight minutes ago. For galaxies millions of light-years away, the light has taken millions of years to reach us. So the universe has a kind of built-in time machine: the further back we look, the younger the universe we are probing. Astrophysicists try to read the history of the universe, and understand how and where we come from.
“Dream in light years, challenge miles, walk step by step” – William Shakespeare
Why do waves form?
A wave begins as the wind ruffles the surface of the ocean. When the ocean is calm and glasslike, even the mildest breeze forms ripples, the smallest type of wave. Ripples provide surfaces for wind to act on, which produces larger waves. Stronger winds push the nascent waves into steeper and higher hills of water. The size a wave reaches depends on the speed and strength of the wind, the length of time the wind blows, and the distance over which it blows in the open ocean, known as the fetch. A long fetch accompanied by strong and steady winds can produce enormous waves. The highest point of a wave is called the crest and the lowest point the trough. The distance from one crest to another is known as the wavelength.

Although water appears to move forward with the waves, for the most part water particles travel in circles within the waves. The visible movement is the wave's form and energy moving through the water, courtesy of energy provided by the wind. Wave speed also varies; on average waves travel about 20 to 50 mph. Ocean waves vary greatly in height from crest to trough, averaging 5 to 10 feet. Storm waves may tower 50 to 70 feet or more. The biggest wave ever recorded by humans was in Lituya Bay, on the southeast side of Alaska, on July 9, 1958, when a massive earthquake triggered a megatsunami, the tallest in modern times. As a wave enters shallow water and nears the shore, its up-and-down movement is disrupted and it slows down. The crest grows higher and begins to surge ahead of the rest of the wave, eventually toppling over and breaking apart. The energy released by a breaking wave can be explosive. Breakers can wear down rocky coasts and also build up sandy beaches.
Why does a tide occur?
Tides are the regular daily rise and fall of ocean waters. Twice each day in most locations, water rises up over the shore until it reaches its highest level, or high tide. In between, the water recedes from the shore until it reaches its lowest level, or low tide. Tides respond to the gravitational pull of the moon and sun. Gravitational pull has little effect on the solid and inflexible land, but the fluid oceans react strongly. Because the moon is closer, its pull is greater, making it the dominant force in tide formation.
Gravitational pull is greatest on the side of Earth facing the Moon and weakest on the side opposite the Moon. The difference in these forces, in combination with Earth's rotation and other factors, allows the oceans to bulge outward on each side, creating high tides. The sides of Earth that are not in alignment with the Moon experience low tides at this time. Tides follow different patterns, depending on the shape of the seacoast and the ocean floor. In Nova Scotia, water at high tide can rise more than 50 feet above the low tide level. Tides tend to roll in gently on wide, open beaches; in confined spaces, such as a narrow inlet or bay, the water may rise to very high levels at high tide.
There are typically two spring tides and two neap tides each month. During a spring tide, whose range is greater than the mean range, the water level rises and falls to the greatest extent from the mean tide level. Spring tides occur about every two weeks, when the Moon is full or new. Tides are at their maximum when the Moon and the Sun are aligned with the Earth. In a semidiurnal cycle, the high and low tides occur around 6 hours and 12.5 minutes apart. The same tidal forces that cause tides in the oceans affect the solid Earth, causing it to change shape by a few inches.
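The 6-hour-12.5-minute spacing follows from the lunar day, assuming it is about 24 hours 50 minutes long and is divided among two high tides and two low tides:

```python
# The lunar day (Moon overhead to Moon overhead again) is ~24 h 50 min,
# because the Moon advances in its orbit while Earth rotates.
LUNAR_DAY_MIN = 24 * 60 + 50          # ~1490 minutes

# Two highs and two lows divide that day into four roughly equal parts.
spacing_min = LUNAR_DAY_MIN / 4
hours, minutes = divmod(spacing_min, 60)
print(f"{int(hours)} h {minutes:.1f} min between a high tide and the next low tide")
```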
When a massive star dies, it leaves a small but dense remnant core in its wake. If the mass of the core is more than 3 times the mass of the Sun, the force of gravity overwhelms all other forces and a black hole is formed. Imagine a star 10 times more massive than our Sun being squeezed into a sphere with a diameter equal to the size of New York City. The result is a celestial object whose gravitational field is so strong that nothing, not even light, can escape it. The history of black holes began with the father of all physics, Isaac Newton. In 1687, Newton gave the first description of gravity in his publication Principia Mathematica, which would change the world. Then, about 100 years later, John Michell proposed the idea that there could exist a structure massive enough that not even light would be able to escape its gravitational pull. In 1796, the famous French scientist Pierre-Simon Laplace made an important prediction about the nature of black holes: he suggested that because even the speed of light was slower than the escape velocity of such a body, these massive objects would be invisible. In 1915, Albert Einstein changed physics forever by publishing his theory of general relativity, in which he explained spacetime curvature and gave a mathematical description of a black hole. And in 1967, John Wheeler gave these objects the name "black hole."
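The city-sized figure follows from the Schwarzschild radius, r_s = 2GM/c², the radius within which nothing can escape. A quick sketch with approximate constant values:

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458     # speed of light, m/s
M_SUN = 1.989e30    # approximate solar mass, kg

def schwarzschild_radius_m(mass_kg):
    """r_s = 2GM/c^2: the event-horizon radius for a non-rotating black hole."""
    return 2 * G * mass_kg / C**2

# A core 10 times the Sun's mass collapses to a horizon only ~30 km in radius:
r = schwarzschild_radius_m(10 * M_SUN)
print(f"{r / 1000:.1f} km")  # ~29.5 km
```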

In classical physics, the mass of a black hole cannot decrease; it can either stay the same or get larger, because nothing can escape a black hole. If mass and energy are added to a black hole, then its radius and surface area should also get bigger. For a black hole, this radius is called the Schwarzschild radius. The second law of thermodynamics states that the entropy of a closed system always increases or remains the same. In 1974, Stephen Hawking, an English theoretical physicist and cosmologist, proposed a groundbreaking theory regarding a special kind of radiation, which later became known as Hawking radiation. Hawking postulated an analogous theorem for black holes, called the second law of black hole mechanics: in any natural process, the surface area of the event horizon of a black hole always increases or remains constant; it never decreases. In thermodynamics, a black body doesn't transmit or reflect any radiation; it only absorbs radiation.
When Stephen Hawking first saw these ideas, he found the notion of shining black holes preposterous. But when he applied the laws of quantum mechanics to general relativity, he found the opposite to be true: he realized that stuff can come out near the event horizon. In 1974, he published a paper in which he outlined a mechanism for this shine, based on the Heisenberg uncertainty principle. According to the principles of quantum mechanics, for every particle throughout the universe there exists an antiparticle. These particles always exist in pairs, and they continually pop in and out of existence everywhere in the universe. Typically, they don't last long: as soon as a particle and its antiparticle pop into existence, they annihilate each other and cease to exist almost immediately after their creation.

The event horizon is the boundary beyond which nothing can escape the black hole's gravity. If a virtual particle pair blips into existence very close to the event horizon of a black hole, one of the particles can fall into the black hole while the other escapes. The one that falls in effectively has negative energy, which is, in layman's terms, akin to subtracting energy from the black hole, or taking mass away from it. The other particle of the pair, which escapes the black hole, has positive energy and is referred to as Hawking radiation. Due to the presence of Hawking radiation, a black hole continues to lose mass and keeps shrinking until it loses all its mass and evaporates. It is not clearly established what an evaporating black hole would actually look like. The Hawking radiation itself would contain highly energetic particles, antiparticles, and gamma rays. Such radiation is invisible to the naked eye, so an evaporating black hole might not look like anything at all. It is also possible that Hawking radiation might power a hadronic fireball, which could degrade the radiation into gamma rays and particles of less extreme energy, making an evaporating black hole visible. Scientists and cosmologists still don't completely understand how quantum mechanics explains gravity, but Hawking radiation continues to inspire research and provide clues into the nature of gravity and how it relates to the other forces of nature.
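The standard formulas behind this picture, the Hawking temperature and the evaporation-time estimate, can be sketched numerically. This is a rough illustration with approximate constant values, not a calculation from the text:

```python
import math

HBAR = 1.0546e-34   # reduced Planck constant, J*s
C = 299_792_458     # speed of light, m/s
G = 6.674e-11       # gravitational constant
K_B = 1.381e-23     # Boltzmann constant, J/K
M_SUN = 1.989e30    # approximate solar mass, kg

def hawking_temperature_k(mass_kg):
    """T = hbar*c^3 / (8*pi*G*M*k_B): lighter black holes are hotter."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

def evaporation_time_s(mass_kg):
    """t = 5120*pi*G^2*M^3 / (hbar*c^4): the standard full-evaporation estimate."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)

print(f"{hawking_temperature_k(M_SUN):.2e} K")  # ~6e-8 K, far colder than space today
years = evaporation_time_s(M_SUN) / 3.156e7
print(f"{years:.1e} years")                     # on the order of 1e67 years
```

The absurdly long timescale is why no evaporating stellar-mass black hole has ever been observed.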
The smallest thing that we can see with a light microscope is about 500 nanometers across. A typical atom is anywhere from 0.1 to 0.5 nanometers in diameter, so we need an electron microscope to image atoms. The electron microscope was invented in 1931. Beams of electrons are focused on a sample; when they hit it, they are scattered, and this scattering is used to recreate an image. Then what about protons or neutrons? Or what about quarks, the most fundamental building blocks of matter? How did we find that such small particles exist? The answer is a particle collider, a tool used since the 1960s to accelerate two beams of particles and smash them together.

The largest machine built by man, the Large Hadron Collider (LHC) is a particle accelerator occupying an enormous circular tunnel 27 kilometers in circumference, lying 165 to 575 feet below ground near Geneva, Switzerland. It is so large that its circumference crosses the border between France and Switzerland. It is a giant collaboration involving over 100 countries and 10,000 scientists. The tunnel itself was constructed between 1983 and 1988 to house another particle accelerator, the Large Electron-Positron Collider (LEP), which operated until 2000. Its replacement, the LHC, was approved in 1995 and was finally switched on in September 2008.
Working of the Large Hadron Collider
The LHC is the most powerful particle accelerator ever built and was designed to explore the limits of what physicists refer to as the Standard Model, which deals with fundamental subatomic particles. Two vacuum pipes are installed inside the tunnel, intersecting in some places, and 1,232 main magnets are connected to the pipes. For proper operation, the collider magnets need to be cooled to -271.3 °C; to attain this temperature, 120 tons of liquid helium are poured into the LHC. These powerful magnets can accelerate protons to near the speed of light, so they complete a circuit in less than 90 millionths of a second. Two beams travel in opposite directions around the ring. At four separate points the two beams cross, causing protons to smash into each other at enormous energies, with their destruction witnessed by super-sensitive instruments. But it's not easy to do this experiment. Each beam consists of bunches of protons, and most of the protons simply miss each other, carry on around the ring, and try again. Because atoms are mostly empty space, getting them to collide is incredibly difficult; it is like making two needles collide when they are fired at each other from 10 kilometers apart.
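The lap time quoted above can be checked with simple arithmetic, assuming the text's round 27 km figure and a proton moving at essentially the speed of light:

```python
C = 299_792_458                 # speed of light, m/s
RING_CIRCUMFERENCE_M = 27_000   # the text's round figure for the LHC ring

one_lap_s = RING_CIRCUMFERENCE_M / C
print(f"{one_lap_s * 1e6:.0f} microseconds per lap")    # ~90, matching the text
print(f"about {1 / one_lap_s:,.0f} laps per second")    # ~11,000
```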

The aim of these collisions is to produce countless new particles that simulate, on a micro scale, some of the conditions postulated in the Big Bang at the birth of the universe. The Higgs boson was discovered with the help of the LHC. This so-called "God particle" could be responsible for the very existence of mass: if it disappeared, all particles in the universe would become absolutely weightless and fly around the universe at the speed of light, whose exact value is 299,792,458 m/s. That means light can reach our Moon from Earth in about 1.3 seconds.
“When you look at a vacuum in a quantum theory of fields, it isn’t exactly nothing.” – Peter Higgs
In 1964, Peter Higgs, along with five other scientists, proposed a theory called the Higgs mechanism to explain the existence of mass in the universe. Before the 1930s, atoms were considered the fundamental particles. Then we found electrons, protons, and neutrons as atomic particles. Later we found that protons and neutrons are made up of even smaller fundamental particles called quarks. Quarks are the fundamental building blocks of the whole universe. The key evidence for the existence of these elementary particles came from a series of inelastic electron-nucleon scattering experiments conducted between 1967 and 1973 at the Stanford Linear Accelerator Center. They are commonly found in protons and neutrons. There are six types of quarks: up, down, top, bottom, strange, and charm. They can have positive (+) or negative (-) electric charge. Up, charm, and top quarks have a +2/3 charge; down, strange, and bottom quarks have a -1/3 charge. So protons are positive because they contain two up quarks (+2/3 each) and one down quark (-1/3), giving a net positive charge (+2/3 + 2/3 - 1/3 = +1). These three quarks are known as valence quarks, but the proton can also contain an additional up quark and anti-up quark pair.
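The charge arithmetic above can be checked with exact fractions. A small illustrative sketch (the dictionary below simply encodes the charge table from the text):

```python
from fractions import Fraction

# Charge of each quark flavor, as given in the text
CHARGE = {
    "up": Fraction(2, 3), "charm": Fraction(2, 3), "top": Fraction(2, 3),
    "down": Fraction(-1, 3), "strange": Fraction(-1, 3), "bottom": Fraction(-1, 3),
}

def net_charge(quarks):
    """Sum the charges of a list of valence quarks exactly."""
    return sum(CHARGE[q] for q in quarks)

print(net_charge(["up", "up", "down"]))    # proton: 1
print(net_charge(["up", "down", "down"]))  # neutron: 0
```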

The Higgs mechanism theory
In the second half of the 20th century, physicists developed a theory called the Standard Model of particle physics. It describes twelve fundamental particles that make up all matter, plus particles called bosons that carry three fundamental forces of nature: the strong force, the weak force, and electromagnetism. Gravity is another force; it is not part of this model, but it can be modeled using general relativity. With the fundamental particles in the Standard Model plus gravity, we can describe almost everything in the entire universe. However, until 2012 the Standard Model had an unresolved problem: in the theory, all force-carrying particles should be massless. Although photons are indeed massless, experiments show that the weak-force bosons have mass. So here was a promising model that could explain our universe, but it might have had to be thrown out because of this seemingly fatal flaw: it was inconsistent regarding the way the weak force worked. In the late 1950s, physicists had no idea how to resolve the issue, and all attempts to solve the problem only created new theoretical ones. In 1964, Peter Higgs hypothesized that perhaps the force particles really were massless but gained mass when they interacted with an energy field permeating the entire universe.
During the very early moments following the Big Bang, the elementary particles in the universe were massless; they were pure streams of energy moving at the speed of light. As the universe expanded, its density and temperature decreased below a certain key value. According to the theory, the Higgs field interacts with particles and can give them mass. It is theorized that different particles interact differently with the field: particles that interact with it more intensely have greater mass, and particles that interact with it less have lower mass. Just imagine the Higgs field as water: pointed objects interact less with the water, while cube-shaped objects interact more. Some particles, like photons, don't interact with the field at all and remain massless. A fundamental part of the theory was the presence of a specific particle, the Higgs boson, which would allow the Higgs mechanism to unfold correctly and give mass to all other particles.

CERN’s discovery of a new particle
Even though Higgs theorized it, scientists were not able to prove it until 2012. Particle accelerators had to reach enormous energies to detect the particle. Finally, the Large Hadron Collider (LHC), CERN's particle accelerator, was turned on in 2008 and managed to recreate the required energy and temperature conditions in 2012. The Higgs boson was finally detected experimentally, and on 4 July 2012 a conference held in the CERN auditorium announced the discovery of a particle compatible with the Higgs boson. The machine accelerates hadron bunches to close to the speed of light and collides them with each other from opposite directions. At four separate points the two beams cross, causing protons to smash into each other at enormous energies, with their destruction witnessed by super-sensitive instruments. Even though the LHC is the world's largest particle accelerator, it had to work hard to detect the Higgs boson. If the Higgs field did not exist, all particles in the universe would be absolutely weightless and fly around the universe at the speed of light. For this reason, the Higgs boson is often called the "God particle."
Our bodies contain many specialized cells that carry out specific functions; these specialized cells are called differentiated cells. Stem cells are cells with the potential to develop into many different types of cells in the body. They act as a repair system for the body. They are unspecialized cells, so they cannot perform specific functions in the body, but they create the potential for cells to be grown as replacement tissues. American developmental biologist James Thomson (b. 1958), from the University of Wisconsin School of Medicine, won the race to isolate human embryonic stem cells. On November 6, 1998, the journal Science published the results of Thomson's research. It described how he used embryos from fertility clinics, donated by couples who no longer needed them, and developed ways to extract stem cells and keep them reproducing indefinitely.

With the ability to develop into any one of the 220 cell types in the body, stem cells hold great promise for treating a host of debilitating illnesses, including diabetes, leukemia, Parkinson's disease, heart disease, and spinal cord injury. They also provide scientists with models of human disease and new ways of testing drugs more effectively in living organisms. But for all the hopes invested, progress has been slow. It has not helped that stem cell research has been steeped in controversy, with different groups questioning the ethics of harvesting stem cells from human embryos.
In 2007, Thomson and Shinya Yamanaka, from Kyoto University, Japan, independently found a way to turn ordinary human skin cells into stem cells. Both groups used four genes to reprogram human skin cells. Their work is being heralded as an opportunity to overcome problems including the shortage of human embryonic stem cells and restrictions on U.S. federal funding for research.
How stem cell therapy works?
Researchers grow stem cells in the lab. These cells are manipulated to specialize into specific types of cells, such as heart muscle cells, blood cells, or nerve cells. The manipulated, specialized cells can then be implanted; for example, healthy heart muscle cells implanted into the heart could contribute to repairing defective heart muscle. The first stem cell therapy was a bone marrow transplant performed by French oncologist Georges Mathé in 1958 on five workers at the Vinča Nuclear Institute in Yugoslavia who had been affected by a criticality accident.
Stem cell therapies have become very popular in recent years, as people seek the latest alternative treatments for their many conditions. Stem cell therapies are very expensive to pursue: even simple joint injections can cost $1,000, and more advanced treatments can cost up to $100,000 depending on the condition. Patients must do their research and ask as many questions as they can before financially committing to treatment. Since these are potentially life-changing treatments, they come at a high cost.

Future stem cell treatments
Stem cell treatment could help us cure various diseases in the future, but it is important not to overhype the potential of stem cells and to accurately communicate findings to the public. We must not allow people to be misled into believing that stem cell treatments can already cure untreatable diseases. However, with more research and investment, I believe that stem cell therapy could transform disease outcomes for many patients.
“The regenerative medicine revolution is upon us. Like iron and steel to the industrial revolution, like the microchip to the tech revolution, stem cells will be the driving force of this next revolution.” -Cade Hildreth
It is difficult to imagine a world without the motorcar. When German engineer Karl Benz drove a motorized tricycle in 1885, and fellow Germans Gottlieb Daimler and Wilhelm Maybach converted a horse-drawn carriage into a four-wheeled motorcar in August 1886, none of them could have imagined the effects of their invention. Benz recognized the great potential of petrol as a fuel. His three-wheeled car had a top speed of just ten miles (16 km) per hour with its four-stroke, one-cylinder engine. After receiving his patent in January 1886, he began selling his Motorwagen, but the public doubted its reliability. Benz's wife, Bertha, had a brilliant idea to advertise the new car: in 1888 she took it on a 60-mile (100 km) trip from Mannheim to near Stuttgart. Despite having to push the car up hills, the success of the journey proved to a skeptical public that this was a reliable mode of transport.

Daimler and Maybach did not produce commercially feasible cars until 1889. Initially the German inventions did not meet with much demand, and it was French companies like Panhard et Levassor that redesigned and popularized the automobile. In 1926, Benz's company merged with Daimler's to form the Daimler-Benz company. Benz had left his company in 1906 and, remarkably, he and Daimler never met. Thanks to higher incomes and cheaper, mass-produced cars, the United States led in terms of motorization for much of the twentieth century. This kind of mobility has, however, come at a cost. Some 25 million people are estimated to have died in car accidents worldwide during the twentieth century. Climate-changing exhaust gases and suburban sprawl are but two more of the consequences of a heavy reliance on the automobile.
Invention of the clutch
Almost all historians agree that the clutch was developed in Germany in the 1880s. Daimler met Maybach while they were working for Nikolaus Otto, the inventor of the internal combustion engine. In 1882 the two set up their own company, and from 1885 to 1886 they built a four-wheeled vehicle with a petrol engine and multiple gears. The gears were external, however, and were engaged by winding belts over pulleys to drive each selected gear. In 1889, they developed a closed four-speed gearbox and a friction clutch to power the gears; this car was the first to be marketed by the Daimler motor company, in 1890. Without a clutch, the wheels keep turning whenever the car engine is running. For the car to stop without stalling, the wheels and engine must be separated by a clutch. A friction clutch consists of a flywheel mounted on the engine side; the clutch plate, attached to the drive shaft, is a large metal plate covered with a frictional material. When the flywheel and clutch make contact, power is transmitted to the wheels.

Gears in Motorcars
Karl Benz was the first to add a second gear to his machine, and he also invented the gear shift to transfer between the two. The suggestion for this additional gear came from Benz's wife, Bertha, who drove the three-wheeled Motorwagen 65 miles from Mannheim to Pforzheim, the first long-distance automobile trip. Gears allow the engine to be maintained at its most efficient rpm while altering the relative speed of the drive shaft to the wheels. Gears originally required double clutching: the clutch had to be depressed to disengage the first gear from the drive shaft, then released to allow the correct rpm for the new gear to be selected. The clutch was then pressed again to engage the drive shaft with the new gear. Modern cars use synchromesh gearboxes, which use friction to match the speeds of the new gear and the shaft before the teeth of the gears engage, meaning that the clutch only needs to be pressed once.
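The rpm relationship the gears manage can be sketched numerically. This is a hypothetical illustration; the ratios below are invented for the example, not taken from the text:

```python
def wheel_rpm(engine_rpm, gear_ratio, final_drive_ratio):
    """The gearbox and final drive divide engine speed down to wheel speed."""
    return engine_rpm / (gear_ratio * final_drive_ratio)

# Hypothetical ratios: a short first gear (3.5:1) vs. a tall top gear (0.8:1),
# both through an assumed 4.0:1 final drive, at the same engine speed.
print(f"first gear: ~{wheel_rpm(3000, 3.5, 4.0):.0f} rpm at the wheels")
print(f"top gear:   ~{wheel_rpm(3000, 0.8, 4.0):.0f} rpm at the wheels")
```

The same engine speed yields very different wheel speeds, which is exactly why the engine can stay near its most efficient rpm.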
“One thing I feel most passionately about: love of invention will never die.” – Karl Benz
NFTs are currently taking the digital art and collectibles world by storm. Digital artists are seeing their lives change thanks to huge sales to a new crypto-audience. And celebrities are joining in as they spot a new opportunity to connect with fans. But digital art is only one way to use NFTs. Really they can be used to represent ownership of any unique asset, like a deed for an item in the digital or physical realm.
NFTs are tokens that we can use to represent ownership of unique items. They let us tokenise things like art, collectibles, even real estate. They can only have one official owner at a time and they’re secured by the Ethereum blockchain – no one can modify the record of ownership or copy/paste a new NFT into existence.
NFT stands for non-fungible token. Non-fungible is an economic term that you could use to describe things like your furniture, a song file, or your computer. These things are not interchangeable for other items because they have unique properties.
Fungible items, on the other hand, can be exchanged because their value defines them rather than their unique properties.
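The one-owner-per-token idea can be sketched as a toy, in-memory registry. This is illustrative Python only, not Ethereum contract code; the class and names are invented for the example:

```python
class ToyNFTRegistry:
    """A toy sketch of non-fungibility: each token id maps to exactly one owner."""

    def __init__(self):
        self._owners = {}  # token_id -> owner address

    def mint(self, token_id, owner):
        if token_id in self._owners:
            raise ValueError("token ids are unique; this one already exists")
        self._owners[token_id] = owner

    def transfer(self, token_id, sender, recipient):
        # Only the current owner may transfer; tokens are not interchangeable.
        if self._owners.get(token_id) != sender:
            raise PermissionError("only the current owner can transfer")
        self._owners[token_id] = recipient

    def owner_of(self, token_id):
        return self._owners[token_id]

registry = ToyNFTRegistry()
registry.mint("artwork-1", "0xAlice")
registry.transfer("artwork-1", "0xAlice", "0xBob")
print(registry.owner_of("artwork-1"))  # 0xBob
```

On Ethereum, the same bookkeeping is enforced by the blockchain rather than by a single trusted program, which is what makes the ownership record tamper-resistant and publicly verifiable.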
NFTs and Ethereum solve some of the problems that exist in the internet today. As everything becomes more digital, there’s a need to replicate the properties of physical items like scarcity, uniqueness, and proof of ownership. Not to mention that digital items often only work in the context of their product. For example you can’t re-sell an iTunes mp3 you’ve purchased, or you can’t exchange one company’s loyalty points for another platform’s credit even if there’s a market for it.
Here’s how an internet of NFTs compares to the internet most of us use today:
| An NFT internet | The internet today |
|---|---|
| NFTs are digitally unique, no two NFTs are the same. | A copy of a file, like an .mp3 or .jpg, is the same as the original. |
| Every NFT must have an owner and this is of public record and easy for anyone to verify. | Ownership records of digital items are stored on servers controlled by institutions – you must take their word for it. |
| NFTs are compatible with anything built using Ethereum. An NFT ticket for an event can be traded on every Ethereum marketplace, for an entirely different NFT. You could trade a piece of art for a ticket! | Companies with digital items must build their own infrastructure. For example an app that issues digital tickets for events would have to build their own ticket exchange. |
| Content creators can sell their work anywhere and can access a global market. | Creators rely on the infrastructure and distribution of the platforms they use. These are often subject to terms of use and geographical restrictions. |
| Creators can retain ownership rights over their own work, and claim resale royalties directly. | Platforms, such as music streaming services, retain the majority of profits from sales. |
| Items can be used in surprising ways. For example, you can use digital artwork as collateral in a decentralised loan. | |
The NFT world is relatively new. In theory, the scope for NFTs is anything unique that needs provable ownership. Here are some examples of NFTs that exist today, to help you get the idea:
We use NFTs to give back to our contributors and we’ve even got our own NFT domain name.
If you contribute to ethereum.org, you can claim a POAP NFT. These are collectibles that prove you participated in an event. Some crypto meetups have used POAPs as a form of ticket to their events. More on contributing.

This website has an alternative domain name powered by NFTs, ethereum.eth. Our .org address is centrally managed by a domain name system (DNS) provider, whereas ethereum.eth is registered on Ethereum via the Ethereum Name Service (ENS). And it’s owned and managed by us.
NFTs are different from ERC-20 tokens, such as DAI or LINK, in that each individual token is completely unique and is not divisible. NFTs give the ability to assign or claim ownership of any unique piece of digital data, trackable by using Ethereum’s blockchain as a public ledger. An NFT is minted from digital objects as a representation of digital or non-digital assets. For example, an NFT could represent:
An NFT can only have one owner at a time. Ownership is managed through the unique ID and metadata that no other token can replicate. NFTs are minted through smart contracts that assign ownership and manage the transferability of the NFTs. When someone creates or mints an NFT, they execute code stored in smart contracts that conform to different standards, such as ERC-721. This information is added to the blockchain where the NFT is being managed. At a high level, the minting process goes through the following steps:
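The ownership rules just described can be sketched in a few lines. This is an illustrative Python model, not real ERC-721 Solidity code; the class, token IDs, and addresses are all invented for the example:

```python
# Illustrative sketch only (not ERC-721 Solidity): a registry in which
# every token ID is unique and has exactly one owner at a time.

class NFTRegistry:
    def __init__(self):
        self._owners = {}     # token_id -> owner address
        self._metadata = {}   # token_id -> metadata (e.g. creator, content URI)

    def mint(self, token_id, owner, metadata):
        if token_id in self._owners:
            raise ValueError("token ID already minted: no copy/paste NFTs")
        self._owners[token_id] = owner
        self._metadata[token_id] = metadata

    def transfer(self, token_id, frm, to):
        if self._owners.get(token_id) != frm:
            raise PermissionError("only the current owner can transfer")
        self._owners[token_id] = to

    def owner_of(self, token_id):
        # the record is public: anyone can verify ownership
        return self._owners[token_id]

registry = NFTRegistry()
registry.mint(42, "0xAlice", {"creator": "0xAlice"})
registry.transfer(42, "0xAlice", "0xBob")
print(registry.owner_of(42))   # -> 0xBob
```

Minting the same token ID twice raises an error, which is the "no one can copy/paste a new NFT into existence" property in miniature.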
NFTs have some special properties:
In other words, if you own an NFT:
And if you create an NFT:
The creator of an NFT gets to decide the scarcity of their asset.
For example, consider a ticket to a sporting event. Just as an organizer of an event can choose how many tickets to sell, the creator of an NFT can decide how many replicas exist. Sometimes these are exact replicas, such as 5000 General Admission tickets. Sometimes several are minted that are very similar, but each slightly different, such as a ticket with an assigned seat. In another case, the creator may want to create an NFT where only one is minted as a special rare collectible.
In these cases, each NFT would still have a unique identifier (like a bar code on a traditional “ticket”), with only one owner. The intended scarcity of the NFT matters, and is up to the creator. A creator may intend to make each NFT completely unique to create scarcity, or have reasons to produce several thousand replicas. Remember, this information is all public.
Some NFTs will automatically pay out royalties to their creators when they’re sold. This is still a developing concept but it’s one of the most powerful. Original owners of EulerBeats Originals earn an 8% royalty every time the NFT is sold on. And some platforms, like Foundation and Zora, support royalties for their artists.
This is completely automatic so creators can just sit back and earn royalties as their work is sold from person to person. At the moment, figuring out royalties is very manual and lacks accuracy – a lot of creators don’t get paid what they deserve. If your NFT has a royalty programmed into it, you’ll never miss out.
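As a toy illustration of a royalty programmed into a token: the 8% rate is the EulerBeats figure quoted above, while the addresses and sale price here are invented.

```python
# Toy royalty split: on every resale, a fixed share goes straight to the
# creator recorded in the token's metadata. Figures are illustrative.

ROYALTY_RATE = 0.08   # 8%, as in the EulerBeats example

def settle_sale(price_eth, seller, creator):
    royalty = price_eth * ROYALTY_RATE
    return {seller: price_eth - royalty, creator: royalty}

payout = settle_sale(10.0, "0xSeller", "0xCreator")
print(payout)   # seller receives 9.2 ETH, creator receives 0.8 ETH
```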
Here’s more information on some of the better-developed use cases and visions for NFTs on Ethereum.
The biggest use of NFTs today is in the digital content realm. That’s because that industry today is broken. Content creators see their profits and earning potential swallowed by platforms.
An artist publishing work on a social network makes money for the platform, which sells ads to the artist’s followers. The artist gets exposure in return, but exposure doesn’t pay the bills.
NFTs power a new creator economy where creators don’t hand ownership of their content over to the platforms they use to publicise it. Ownership is baked into the content itself.
When they sell their content, funds go directly to them. If the new owner then sells the NFT, the original creator can even automatically receive royalties. This is guaranteed every time it’s sold because the creator’s address is part of the token’s metadata – metadata which can’t be modified.
The Hubble Space Telescope is the most famous telescope in the world. It was named after the astronomer Edwin Hubble, who changed our understanding of the universe by proving the existence of other galaxies. An automated observatory, it has discovered millions of new objects in space. It has helped us witness the birth of new stars, find planets outside the solar system, and see supermassive black holes. Hubble was launched in 1990, and from December 1993 to May 2009 astronauts visited the telescope five times to make repairs and install new instruments.

Hubble holds the record for the longest range of observation. The light from the most distant galaxies has taken billions of years to travel across the universe and reach Hubble. By taking such pictures, Hubble is literally looking back in time to the very early universe. On the right side of the image, there is a galaxy very much like the Milky Way; that galaxy is about five billion light-years away, so we are looking back in time by five billion years. On March 4, 2016, NASA released a historic image, one that many believed was impossible. It captured the farthest of all known galaxies, located about 13.4 billion light-years away from us. The light from this galaxy has only just reached the Earth after crossing the distance that separates us; that is, we now observe it as it was 400 million years after the Big Bang. This galaxy is 25 times smaller than our galaxy, the Milky Way. Hubble also helped pin down the age of the universe, now known to be 13.8 billion years – roughly three times the age of the Earth.
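The lookback arithmetic in this paragraph is easy to check (this is light-travel time only; the galaxy's comoving distance today is larger):

```python
# Back-of-envelope check of the figures quoted above.
age_universe_gyr = 13.8            # current age of the universe
emitted_after_big_bang_gyr = 0.4   # light left the galaxy ~400 Myr after the Big Bang

lookback_gyr = age_universe_gyr - emitted_after_big_bang_gyr
print(round(lookback_gyr, 1))      # -> 13.4, matching the quoted 13.4 billion light-years

# and the universe is roughly three times the age of the Earth (~4.54 Gyr)
print(round(age_universe_gyr / 4.54, 1))   # -> 3.0
```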
With its advanced camera, NASA’s Hubble Space Telescope discovered a new planet called Fomalhaut b orbiting its parent star Fomalhaut. Fomalhaut is 2.3 times heavier and 6 times larger than the Sun; around it is a disc of cosmic dust that creates the resemblance of an ominous eye. Fomalhaut b lies 1.8 billion miles inside the ring’s inner edge and orbits 10.7 billion miles from its star. Astronomers have calculated that Fomalhaut b completes an orbit around its parent star every 872 years. The Fomalhaut system is 25 light-years away in the constellation Piscis Australis. But in April 2020, astronomers began doubting the planet’s existence: it is missing from newer Hubble pictures. Scientists now believe that this “planet” was a cloud of dust and debris formed by a collision of two icy celestial bodies.
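The quoted orbital period can be sanity-checked with Kepler's third law, T² = a³/M (T in years, a in AU, M in solar masses), using only figures from the paragraph above. The rounded inputs land in the same ballpark as the quoted 872 years:

```python
import math

# Kepler's third law check with the numbers quoted in the text.
AU_MILES = 92.96e6                # miles in one astronomical unit
a_au = 10.7e9 / AU_MILES          # orbital radius: ~115 AU
m_star = 2.3                      # Fomalhaut's mass in solar masses (from the text)

period_years = math.sqrt(a_au**3 / m_star)
print(round(period_years))        # ~814 years, same ballpark as the quoted 872
```

The ~7% gap between this estimate and 872 years is consistent with the rounded distance and mass figures going in.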

In 1995, Hubble captured the most detailed image of the iconic feature called the Pillars of Creation. The Pillars of Creation are a fascinating but relatively small feature of the much larger Eagle Nebula. In the image, blue represents oxygen, red represents sulfur, and green represents both nitrogen and hydrogen. The nebula, discovered in 1745 by the Swiss astronomer Jean-Philippe Loys de Chéseaux, is located 7,000 light-years from Earth in the constellation Serpens. During its working life Hubble has produced millions of images, but NASA has since suspended missions to repair and modernize the telescope. It is expected that in 2021, Hubble will be succeeded by the new James Webb Space Telescope.
Most of us might have had this idea: magnets attract each other at opposite poles, so why can’t we use this to create free energy? For instance, place a magnet or a piece of metal in a car, attach another magnet to a rod, and hold it in front of the car so that the two keep attracting each other. With this idea, we could move the car without any energy, forever. A perpetual motion machine is a device that is supposed to work indefinitely without any external energy source. Imagine a windmill that produced the breeze it needed to keep rotating, or a light bulb whose glow provided its own electricity. These devices have captured many inventors’ imaginations because they could transform our relationship with energy. It sounds cool, right? But there is just one problem: it won’t work.

In countless instances in history, people have claimed to have made a perpetual motion machine. Around 1159 A.D., the mathematician Bhaskara the Learned sketched a design for a wheel containing curved reservoirs of mercury. He reasoned that as the wheel spun, the mercury would flow to the bottom of each reservoir, leaving one side of the wheel perpetually heavier than the other. The imbalance would keep the wheel turning forever. Bhaskara’s drawing was one of the earliest designs for a perpetual motion machine. Many more designs followed, like Zimara’s self-blowing windmill in the 1500s, the capillary bowl, where capillary action forces water upwards, and the Oxford Electric Bell, which ticks back and forth due to charge repulsion. In fact, the US patent office stopped granting patents for perpetual motion machines without a working prototype.
Why won’t perpetual motion machines work?
Ideas for perpetual motion machines all violate one or more fundamental laws of thermodynamics. These laws describe the relationship between different forms of energy. The first law of thermodynamics says that energy can neither be created nor destroyed: you can’t get out more energy than you put in. That rules out a useful perpetual motion machine right away, because a machine could only ever produce as much energy as it consumed – there wouldn’t be any leftover energy to power a car or charge a phone. But what if you just wanted the machine to keep itself moving? Take Bhaskara’s wheel: the moving parts that make one side of the wheel heavier also shift its center of mass downward, below the axle. With a low center of mass, the wheel just swings back and forth like a pendulum and eventually stops. In the 17th century, Robert Boyle came up with an idea for a self-watering pot. He theorized that capillary action – the attraction between liquids and surfaces that pulls water through thin tubes – might keep water cycling around the bowl. But if the capillary action is strong enough to overcome gravity and draw the water up, it would also prevent the water from falling back into the bowl.

For each of these machines to keep moving, they would have to create some extra energy to nudge the system past its stopping point, breaking the first law of thermodynamics. There are machines that seem to keep moving, but in reality they invariably turn out to be drawing energy from some external source. Even if engineers could design a machine that didn’t violate the first law of thermodynamics, it still wouldn’t work in the real world, because of the second law. The second law of thermodynamics tells us that energy tends to spread out through processes like friction and heating. Any real machine has moving parts, or interactions with air or liquid molecules, that generate tiny amounts of friction and heat, even in a vacuum. That heat is energy escaping, and it keeps leaking out, reducing the energy available to move the system itself until the machine inevitably stops. The same goes for the idea of a car pulled by magnets: even if the magnet were powerful enough to move the car, friction would eventually bring it to a stop. These two laws of thermodynamics defeat every idea for perpetual motion, so we can conclude that perpetual motion machines are impossible.
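The second-law argument can be made concrete with a toy simulation: give a pendulum even a small friction term and its mechanical energy only ever drains away. All the constants below are arbitrary illustrative values (unit mass, unit length):

```python
import math

# Toy damped pendulum: with any friction at all, the sampled mechanical
# energy decreases step after step -- no perpetual motion.

g, L, damping, dt = 9.81, 1.0, 0.1, 0.001
theta, omega = 1.0, 0.0   # initial angle (rad) and angular velocity (rad/s)

def energy(theta, omega):
    # kinetic + gravitational potential energy per unit mass
    return 0.5 * (L * omega) ** 2 + g * L * (1 - math.cos(theta))

energies = []
for step in range(20000):            # 20 seconds of motion
    alpha = -(g / L) * math.sin(theta) - damping * omega
    omega += alpha * dt              # semi-implicit Euler step
    theta += omega * dt
    if step % 1000 == 0:
        energies.append(energy(theta, omega))

# Sampled once per second, the energy never increases.
print(all(b <= a for a, b in zip(energies, energies[1:])))
```

Setting `damping = 0` would leave the (numerically conserved) energy constant forever, but no real mechanism gets to set friction exactly to zero.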
YOU CAN’T GET SOMETHING FOR NOTHING.
The early models of the atom were simple: protons and neutrons forming a nucleus, with negatively charged electrons orbiting it, like a tiny solar system. In the early 1930s, however, analysis of cosmic rays and experiments with particle accelerators revealed new particles by the dozen. In the early 1960s, American physicists Murray Gell-Mann and George Zweig independently conjectured that protons and neutrons were made of even more fundamental particles, which Gell-Mann in 1964 named quarks. The word quark came from James Joyce’s novel Finnegans Wake, in which it is a nonsense word coined by Joyce. The key evidence for the quarks’ existence came from a series of inelastic electron-nucleon scattering experiments conducted between 1967 and 1973 at the Stanford Linear Accelerator Center. Other theoretical and experimental advances of the 1970s confirmed this discovery, leading to the Standard Model of elementary particle physics currently in force.

Properties of Quarks
Quarks are most commonly found inside protons and neutrons. They have many properties, including mass, electric charge, and color charge. There are six types of quarks: up, down, top, bottom, strange, and charm. A quark can have a positive (+) or negative (−) electric charge: up, charm, and top quarks carry a charge of +2/3, while down, strange, and bottom quarks carry −1/3. A proton is positive because it contains two up quarks (+2/3 each) and one down quark (−1/3), giving a net positive charge (+2/3 + 2/3 − 1/3 = +1). These three quarks are known as valence quarks, but the proton can also contain additional quark–antiquark pairs, such as an up quark and an anti-up quark.
An antiquark is the antiparticle of a quark, and these pairs can be of other quark types as well: pairs of strange and anti-strange quarks, or charm and anti-charm quarks. In fact, the proton contains a multitude of quark–antiquark pairs. The quarks are held together by the strong force, which is carried by particles called gluons; inside the proton, countless gluons and quarks are all moving around close to the speed of light. The valence quarks that define a proton make up only about 1% of its mass. A neutron consists of two down quarks and one up quark, giving it an overall charge of 0. Quarks also have a property called color charge, with three colors – red, blue, and green – each complemented by an anti-color. When the three colors mix, we get white, which is why the proton is called colorless. The quarks change their colors constantly, but the anti-colors mix in so that the colorless state is maintained. The interaction between quarks and gluons is responsible for almost all the perceived mass of protons and neutrons, and is therefore where we get our mass.
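The charge bookkeeping in the two paragraphs above can be checked with exact fractions:

```python
from fractions import Fraction

# Valence-quark charge sums from the text: proton = uud, neutron = udd.
charge = {"up": Fraction(2, 3), "down": Fraction(-1, 3)}

proton  = charge["up"] + charge["up"] + charge["down"]
neutron = charge["up"] + charge["down"] + charge["down"]

print(proton, neutron)   # -> 1 0
```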

Conclusion
The discovery of quarks was a gradual process that took over a decade for the entire sequence of events to unfold. A variety of theoretical insights and experimental results contributed to this discovery, but the MIT–SLAC deep inelastic electron scattering experiments played a vital role. The existence of quarks is recognized today as a cornerstone of the Standard Model. In numerous experiments at CERN, including those at the Large Hadron Collider (LHC), physicists are measuring the properties of Gell-Mann and Zweig’s particles with ever-greater precision.
“Three quarks for Muster Mark!” – James Joyce

With the government encouraging Make in India campaigns and promoting Atmanirbhar Bharat, the budget for the year incorporates several schemes and initiatives that will help businesses and e-commerce websites operate more easily. Schemes such as the “Ease of Doing Business 2.0” and “One Nation, One Registration” programs and the “Digital Ecosystem for Skilling and Livelihood” (DESH-Stack) portal will aid business operations with measures such as:
• Digitization for improved transparency and ease of use
• Interlinking of the ASEEM (Atmanirbhar Skilled Employee-Employer Mapping), Udyam, e-Shram and National Career Service (NCS) portals to create connected databases and improve efficiency
The Amazon effect refers to the impact of the online, e-commerce, or digital marketplace on the traditional brick-and-mortar business model – the result of changes in shopping patterns, customer expectations, and the industry’s competitive landscape. As online shopping and e-commerce grow in popularity, many traditional businesses with only a physical location have been hurt, forced to compete with the online marketplace.
Counter
Amazon has enabled small businesses to reach millions of customers across India by providing an ecosystem for them to use. These local sellers have been empowered to offer a superlative customer experience, with increased product exposure, expert endorsements, and product reviews. Amazon has also created a space that can foster these sellers’ unique products and services.
Amazon is in virtually every industry, from food delivery to content streaming to e-commerce. It is truly an unrivalled monopoly, and it has had several fantastic technological innovations in the last decade.
Fire TV
Fire TV is Amazon’s response to similar products from competitors, such as Apple TV and Roku. The product is gobbling up market share quickly: by mid-2019 it had more than 34 million active users. The streaming industry measures market share separately for the box and the stick; in the U.S., Amazon’s Fire TV box had a 28.5% share of the market but 57% of the market for sticks. Its versatility has garnered rave reviews from industry analysts: a Fire TV box streams live TV and allows users to watch hundreds of queued shows and movies. It is also a popular and well-received gaming device.
Amazon Alexa
While the voice-assisted technology isn’t entirely there yet, Amazon’s voice-responding virtual assistant is helping to propel it forward. With the Amazon Echo, Tap, and Dot, Amazon is getting people accustomed to using this technology—and trying to grab their share of the market (versus competitors like Google and Apple). Amazon’s voice assistant, Alexa, is amazing at doing Amazon things (placing orders, finding music, etc.). And she can hear and respond to voice commands at a normal volume from across a noisy room, which is pretty impressive.
The social impact of e-commerce can be measured by satisfaction and trust.
Social media plays a key role in the dynamics of the website–consumer relationship, with the number of people using the internet increasing day by day. The effect social media has, particularly on Millennials and Gen Z, is astonishing. Various social media platforms have a strong impact on people’s lifestyles today. Facebook, Quikr, Snapdeal, Amazon, Pinterest, and Instagram greatly affect consumers’ online shopping habits through SEO and SMO, and strategies like pay-per-click advertising, whereby they reach the target audience effectively.
Social media is also a big influence on their purchase decisions. In particular, 64.2% of Gen Z noted that they get shopping inspiration from Instagram, compared to 39.1% of Millennials. These younger generations also care a great deal about ethics, sustainability, and equality. For example, 41% of Gen Z said they’d pay more for sustainable fashion, while 73.9% of Millennials think it’s very or fairly important that brands show that they are pro-diversity and pro-equality.
Recycling
Amazon is committed to reducing its environmental footprint through recycling initiatives in its own operations and through partnerships that support the development of recycling infrastructure across the industry.
Carbon footprint
Amazon’s corporate carbon footprint quantifies the total greenhouse gas emissions attributed to its direct and indirect operational activities. The company measures its total impact on the climate, maps the largest activities contributing to this impact, and develops carbon reduction strategies to reach net-zero carbon emissions across its business by 2040.
Renewable Energy
As part of its goal to reach net-zero carbon by 2040, Amazon is on a path to powering its operations with 100% renewable energy by 2025 – five years ahead of its original target of 2030. In 2020, it became the world’s largest corporate purchaser of renewable energy, reaching 65% renewable energy across its business.
The Climate Pledge
Amazon is committed to building a sustainable business for its customers and the planet. In 2019, Amazon co-founded The Climate Pledge – a commitment to be net-zero carbon across its business by 2040, 10 years ahead of the Paris Agreement.
In June 2021, the Department of Consumer Affairs came up with the Consumer Protection (E-Commerce) (Amendment) Rules, 2021, placing some stringent rules on e-commerce websites such as Amazon.
Some of the rules are:
Use of marketplace entity’s name or brand for advertising or sale of products or services: Marketplaces may not use their name or brand to represent that their offers are from the marketplace itself.
Misuse of a dominant market position: An e-commerce business may not abuse its dominant position in any market.
Fallback liability: In the event that a seller on a marketplace platform fails to provide goods or services, resulting in a loss to the customer, the marketplace will be held liable.
Deceptive advertising: An e-commerce platform should not allow misleading advertisements. Falsely portraying a product or service, or falsely promising or misleading regarding its nature, are examples of deceptive advertising under the Act.
PepsiCo is the largest-selling beverage company in the world after its arch-rival Coca-Cola. It accounts for a 37% share of the global beverage market, and therefore needs to understand each country’s market in order to stay in line with its PESTLE situation. PepsiCo is a big brand, currently holding 23rd place in the Interbrand report of the world’s leading brands. Its advertisements feature major celebrities and athletes like David Beckham, Robbie Williams, Britney Spears, Michael Jackson, and Kendall Jenner.
Their market reach is also very diverse, as they’re present in almost every country from the US to New Zealand. A probable PESTLE analysis for them is given below:
Political:
Major economies like the United States and Canada are politically stable, but in many parts of the world civil unrest results in sales dips, product seizures, disrupted supply chains, product damage, and hence losses. Most importantly, policies differ starkly across borders, so Pepsi has to stay in line with all those policies and changes so that it can adapt accordingly.
Besides, US government initiatives against sweetened carbonated drinks are a threat that could reduce PepsiCo’s revenues in the future. With the introduction of the American soda tax, the price of soda rose 3 cents per ounce when the tax was adopted by Philadelphia. Although the soda tax originated in 2015, since Philadelphia’s adoption, Oakland, Seattle (Washington), San Francisco, and Boulder (Colorado) have also integrated this change. The government is trying to make a point: sugar and obesity are among the biggest threats to American youth health today, and a stop has to be put to them.
Economic:
As the recent economic downturn has plagued the economy, companies have had to restructure their sales and marketing campaigns greatly, so they will have to rethink budgets. Also, if profits diminish they may have to downsize internally and rethink how to increase sales. Economic conditions have the highest influence on a business, regardless of what trade it is in.
Social:
Social factors greatly impact PepsiCo: as a non-alcoholic beverage maker, it has to remain in line with the strict and stark differences between cultures the world over. Also, Pepsi has to communicate its image as a global brand, so that people can associate it with something that connects the world together. People are avoiding sweetened aerated beverages (an average can has about 40 grams of sugar) and obesity is becoming a concern – Pepsi has to address these concerns.
PepsiCo runs various CSR projects globally for food, water, and children’s well-being.
Technological:
With the advent of the new age in technology, companies have completely integrated themselves with the recent changes that have taken place. A recent trend that has greatly picked up, and that almost every business is turning toward, is social media advertising. The social media explosion has allowed increasingly interactive engagement with consumers, with real-time results, so Pepsi has to stay ahead of all the developments that take place, keeping in view how the youth of today use technology and how Pepsi can reach them in order to keep increasing brand recall and brand engagement.
E-commerce delivery can also be looked at. Factory automation is another developmental focus area and technology upgradation could help production.
Legal:
There can be many legal implications for the beverage industry.
Pepsi is a non-alcoholic beverage and is therefore regulated by the FDA, so the company has to consistently maintain the standards set out by the FDA. Also, many markets across the world have different sets of regulations that are either relaxed or stringent.
In the early decade of this century, Pepsi was accused of using contaminated water in the Indian market, after a lab test was done on the water flowing into a Pepsi factory located near an industrial estate. A massive recall pulled products from shelves for testing, costing the company heavily, as India is a very major market. Pesticide charges were another legal controversy.
Environmental:
Plastic is adding to environmental strain, so bottling and packaging will have to be rethought.
Over-utilization of water resources for manufacturing has also become a concern today.
During World War II the United States spent an unprecedented $2 billion to fund an ultra-secret research and development program, the outcome of which would alter the relationships of nations forever. Known as the Manhattan Project, it was the effort by the United States and her closest allies to create a practical atomic bomb: a single device capable of mass destruction, the threat of which alone could be powerful enough to end the war. The motivation was simple. Scientists escaping the Nazi regime had revealed that research in Germany had confirmed the theoretical viability of atomic bombs. In 1939, fearing that the Nazis might be developing such a weapon, Albert Einstein and others wrote to President Franklin D. Roosevelt (FDR) warning of the need for atomic research. By 1941 FDR had authorized formal, coordinated scientific research into such a device. Among those whose efforts would ultimately unleash the power of the atom was J. Robert Oppenheimer, who was appointed the project’s scientific director in 1942. Under his direction the famous laboratories at Los Alamos were constructed and the scientific team assembled. On July 16, 1945, near the small town of Alamogordo, New Mexico, the course of human history was changed: the first atomic bomb was detonated that day.

Principle of an atomic bomb
An atom bomb works on the principle that when you break up the nucleus of an atom, a large amount of energy is released: it takes a large amount of energy to keep the nucleus bound together, and when you split it apart, that energy is released. Scientists chose the biggest and heaviest nucleus found in nature as the best candidate for splitting: uranium. Uranium is unique in that one of its isotopes is the only naturally occurring isotope capable of sustaining a nuclear fission chain reaction. A common uranium atom has 92 protons and 146 neutrons, together giving an atomic mass of 238, hence U-238. A very small portion of uranium, when it is mined, is in the form of the isotope U-235; this isotope has the same 92 protons but only 143 neutrons, three fewer than U-238. U-235 is highly unstable, which makes it highly fissionable. When U-235 is slammed by a neutron, it briefly becomes uranium-236. In the process of splitting into two more stable atoms, a whole bunch of energy is released, along with three more neutrons. These three neutrons fly out and slam into more U-235 atoms, and thus a chain reaction occurs, causing more and more U-235 to be split and ultimately causing a huge explosion. Natural uranium contains only 0.7% of the U-235 isotope, and a whole lot of it is needed to make one atomic bomb.
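The chain-reaction growth just described can be sketched with an idealized model in which every one of the ~3 released neutrons triggers another fission. A real mass loses neutrons to escape and absorption, which is exactly what the criticality engineering below is about:

```python
# Idealized chain reaction: one fission -> ~3 neutrons -> 3 fissions -> ...
NEUTRONS_PER_FISSION = 3

def fissions_in_generation(n):
    """Fissions occurring in generation n, starting from a single fission."""
    return NEUTRONS_PER_FISSION ** n

for gen in (0, 5, 10, 15):
    print(gen, fissions_in_generation(gen))
# generation 15 alone has 14,348,907 fissions -- this exponential growth
# is why the energy release is so sudden
```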

Another engineering challenge is to create a vessel of the correct shape and material to contain the neutrons after fissioning, so that they do not escape but instead cause more atoms to fission. The vessel is lined with a special reflector that forces neutrons back into the fissionable material rather than letting them escape. Then the correct amount of fissionable material has to be placed inside this vessel – a supercritical mass, enough to sustain an uncontrollable chain reaction resulting in an explosion. The supercritical mass has to be kept apart until you are ready for an explosion; otherwise an explosion can occur when you don’t want it, because these isotopes are unstable and throw off neutrons randomly. In an atomic bomb, two subcritical masses are slammed together, usually by a conventional explosive contained inside the outer bomb; this conventional charge initiates the chain reaction. The Manhattan Project ultimately created the first man-made nuclear explosion, which Robert Oppenheimer named “Trinity”, on July 16, 1945. The concept of an atom bomb is simple, but the process of actually creating one is not.
“Now I am become Death, the destroyer of worlds.” – J. Robert Oppenheimer

Beginning with the first agricultural settlements, people have been utilising biological processes to enhance their quality of life for over 10,000 years. Humans began to use microbes’ biological processes to manufacture bread, alcoholic drinks, and cheese, as well as to preserve dairy goods, some 6,000 years ago. However, such processes are not included in today’s definition of biotechnology, which was coined to describe the molecular and cellular technologies that emerged in the 1960s and 1970s. In the mid- to late 1970s, a nascent “biotech” sector emerged, led by Genentech, a pharmaceutical firm founded in 1976 by Robert A. Swanson and Herbert W. Boyer to commercialise Boyer, Paul Berg, and Stanley N. Cohen’s recombinant DNA technology. Genentech, Amgen, Biogen, Cetus, and Genex were among the first businesses to produce genetically altered molecules for medicinal and environmental purposes.
Recombinant DNA technology, often known as genetic engineering, dominated the biotechnology sector for more than a decade. Splicing the gene for a useful protein (typically a human protein) into production cells (such as yeast, bacteria, or mammalian cells in culture) causes those cells to produce the protein in large quantities. A new organism is produced when a gene is spliced into a producing cell. Biotechnology investors and researchers were at first unsure whether the courts would allow them to obtain patents on organisms; after all, patents were not permitted on newly found and identified organisms in nature. However, in the case of Diamond v. Chakrabarty, the United States Supreme Court decided in 1980 that “a living human-made microbe is patentable subject matter.” This decision resulted in the formation of a slew of new biotechnology companies as well as the industry’s first investment boom. Recombinant insulin became the first genetically engineered product approved by the US Food and Drug Administration, in 1982. Since then, hundreds of genetically modified protein therapies, such as recombinant growth hormone, clotting factors, proteins that stimulate the production of red and white blood cells, interferons, and clot-dissolving agents, have been sold across the world.
[Figure: In a laboratory, a researcher purifies molecules for the manufacture of therapeutic proteins from biological material. Credit: Uwe Moser/Alamy]
Methodologies and tools
The capacity to produce naturally occurring therapeutic compounds in larger quantities than could be obtained from conventional sources such as plasma, animal organs, and human cadavers was the primary success of biotechnology in its early years. Recombinant proteins are also less likely to be contaminated with pathogens, and allergic responses to them are less common. Biotechnology experts are now working to identify the underlying biological causes of disease and intervene precisely at that level. As with the first generation of biotech drugs, this might mean creating therapeutic proteins to supplement the body’s own resources or compensate for hereditary deficiencies. (A related procedure is gene therapy, which involves inserting genes encoding a required protein into a patient’s body or cells.)
The biotechnology sector has also increased its research into conventional medications and monoclonal antibodies that can halt disease progression. One of the most important biotechnology approaches to emerge in the final part of the twentieth century was the successful manufacture of monoclonal antibodies. The specificity and widespread availability of monoclonal antibodies made possible sensitive tests for a wide range of physiologically essential chemicals, as well as the capacity to differentiate cells by recognising hitherto unidentified marker molecules on their surfaces. The study of genes (genomics), the proteins that they encode (proteomics), and the wider biological pathways in which they function made such advances possible.
Biotechnology offers a wide range of uses, including medicine and agriculture. Biotechnology could be used to merge biological information with computer technology (bioinformatics), or it could be used to investigate the use of microscopic equipment that can enter the human body (nanotechnology), or it could be used to replace dead or defective cells and tissues using stem cell research and cloning techniques. Biotechnology has been useful in refining industrial processes through the discovery and production of biological enzymes that spark chemical reactions (catalysts); in environmental cleanup with enzymes that digest contaminants into harmless chemicals and then die after consuming the available “food supply”; and in agricultural production through genetic engineering. Biotechnology’s agricultural uses have been the most contentious. Some environmentalists and consumer groups have proposed GMO bans or labelling regulations to alert people to the rising prevalence of GMOs in the food chain. GMOs were first introduced into agriculture in the United States in 1993, when the FDA authorised bovine somatotropin (BST), a growth hormone that increases milk output in dairy cows. The FDA authorised the first genetically modified whole product the following year, a tomato with a longer shelf life. Since then, dozens of agricultural GMOs have received regulatory clearance in the United States, Europe, and abroad, including crops that make their own insecticides and crops that resist the application of certain herbicides.
[Figure: Genetically engineered organisms are created using scientific techniques such as recombinant DNA technology. Source: Encyclopaedia Britannica, Inc.]
GMO foods have been found to be safe by studies conducted by the United Nations, the National Academy of Sciences of the United States, the European Union, the American Medical Association, US regulatory agencies, and other organisations, but sceptics argue that it is still too early to judge the long-term health and ecological effects of such crops. The land area planted in genetically modified crops expanded substantially in the late twentieth and early twenty-first centuries, from 1.7 million hectares (4.2 million acres) in 1996 to 180 million hectares (445 million acres) in 2014. Approximately 90% of maize, cotton, and soybeans cultivated in the United States were genetically modified by 2014–15. The Americas were home to the bulk of genetically modified crops.
Over the five-year period from 1996 to 2000, the revenues of the biotechnology sectors in the United States and Europe almost quadrupled. The development of new products, notably in health care, spurred rapid expansion far into the twenty-first century. The worldwide biotechnology market is expected to be worth $752.88 billion by 2020, with significant growth potential arising in particular from government and industry-led efforts to speed up medication research and product clearance procedures.

Nanotechnology is a phrase used to describe fields of science and engineering in which phenomena occurring at nanoscale dimensions are used in the design, characterization, manufacture, and application of materials, structures, devices, and systems. Although there are many examples of structures with nanometer dimensions (hereafter referred to as the nanoscale) in the natural world, such as essential molecules in the human body and food components, and although many technologies have inadvertently involved nanoscale structures for many years, it has only been in the last quarter of a century that it has been possible to actively and intentionally modify molecules and structures within this size range. Nanotechnology is distinguished from other fields of technology by its ability to manipulate things at the nanometer scale.
Clearly, nanotechnology in its different manifestations has the potential to have a huge influence on society. In general, it is reasonable to expect that the deployment of nanotechnology will benefit both individuals and organizations. Many of these applications involve novel materials that act at the nanoscale, where new phenomena are connected with the extremely large surface-area-to-volume ratios observed at these dimensions, as well as quantum effects that are not seen at larger scales. Materials in the form of ultra-thin films for catalysis and electronics, nanotubes and nanowires for optical and magnetic systems, and nanoparticles for cosmetics, medicines, and coatings are all examples. The industrial sectors most readily embracing nanotechnology are the information and communications sector, which includes electronic and optoelectronic fields; food technology; energy technology; and the medical products sector, which includes many different aspects of pharmaceuticals and drug delivery systems, diagnostics, and medical technology, where the terms nanomedicine and bionanotechnology are already commonplace. Nanotechnology products may also offer fresh opportunities for environmental pollution mitigation. However, just as phenomena occurring at the nanoscale may be quite different from those occurring at larger dimensions and may be exploitable for the benefit of mankind, these newly identified processes and their products may expose the same humans, as well as the environment in general, to new health risks, potentially involving quite different mechanisms of interference with human and environmental physiology. These possibilities are likely to centre on the fate of free nanoparticles produced in nanotechnology processes and discharged into the environment, either purposefully or accidentally, or delivered directly to persons through the use of a nanotechnology-based product.
Individuals whose jobs expose them to free nanoparticles on a regular basis should be particularly concerned. Central to these health-risk concerns is the fact that, over the course of evolution, the human species has developed mechanisms of protection against environmental agents, both living and non-living. These mechanisms are tuned to the nature of the agents commonly encountered, with size being a key factor. Exposure to nanoparticles with previously unknown properties may therefore challenge the body’s usual defence mechanisms, such as the immune and inflammatory systems. It is also likely that nanotechnology products will have an environmental impact through processes of dispersion and persistence of nanoparticles in the environment. Wherever the possibility of a completely new risk is identified, a detailed examination of the risk’s nature is required, which may subsequently be used in risk management processes if necessary. It is commonly acknowledged that the hazards associated with nanotechnology should be investigated in this manner. Many international organisations (e.g. Asia Pacific Nanotechnology Forum 2005), European Union governmental bodies (European Commission 2004), national institutions, non-governmental organizations (e.g. UN-NGLS 2005), learned institutions and societies, and individuals (e.g. Oberdörster et al 2005, Donaldson and Stone 2003) have published reports on the current state of nanotechnology. The European Council has emphasized the importance of paying close attention to potential risks throughout the life cycle of nanotechnology-based products, and the European Commission has expressed its desire to work on an international level to establish a framework of shared principles for the safe, sustainable, responsible, and socially acceptable use of nanotechnologies.
There are numerous definitions of nanotechnology and nanotechnology products, which are frequently developed for specific reasons. The fundamental scientific principles of nanotechnology have been deemed more significant than the semantics of a definition in this Opinion, thus they are addressed first. The Committee believes that the UK Royal Society and Royal Academy of Engineering’s definition of nanoscience and nanotechnology in their 2004 report (Royal Society and Royal Academy of Engineering 2004) effectively communicates these notions. This implies that the nanoscale extends from the atomic level (about 0.2 nm) to roughly 100 nm. Because of the significantly increased ratio of surface area to mass, and also because quantum effects begin to play a role at these dimensions, leading to significant changes in several types of physical property, materials in this range can have significantly different properties than the same substances at larger sizes.
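The scale dependence of the surface-area-to-mass ratio mentioned above can be illustrated with a quick calculation. The sketch below (a simple geometric illustration, not taken from the Opinion) uses the fact that for a sphere the surface-area-to-volume ratio simplifies to 3/r, so shrinking a particle's radius from 1 µm to 10 nm multiplies the ratio a hundredfold:

```python
# For a sphere: surface area = 4*pi*r^2, volume = (4/3)*pi*r^3,
# so the surface-area-to-volume ratio simplifies to 3/r.

def surface_to_volume_ratio(radius_m: float) -> float:
    """Surface-area-to-volume ratio of a sphere of the given radius (1/m)."""
    return 3.0 / radius_m

micro_particle = surface_to_volume_ratio(1e-6)   # 1 micrometre radius
nano_particle = surface_to_volume_ratio(10e-9)   # 10 nm radius

print(nano_particle / micro_particle)  # ~100: a hundredfold increase
```

The same geometry is why a gram of nanoparticles exposes vastly more surface (and hence more reactive area) than a gram of the same substance in bulk form.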
The words used in this Opinion are defined in accordance with the British Standards Institution’s recently released Publicly Available Specification on the Vocabulary for Nanoparticles (BSI 2005), which proposes the following meanings for the key generic terms:
Nanoscale refers to objects with one or more dimensions of 100 nanometres or less. Nanoscience is the study of phenomena and material manipulation at the atomic, molecular, and macromolecular scales, where properties differ dramatically from those at larger scales.
A nanocomposite is a composite in which at least one of the components has a nanoscale dimension. It’s worth noting that nanoscience and nanotechnology have exploded in popularity in recent years, and the terminology used by the respective fields hasn’t always been consistent. Furthermore, as this report points out, there have been and continue to be significant challenges in precisely measuring nanoscale parameters, making it difficult to have complete confidence in data and conclusions drawn about specific phenomena relating to specific features of nanostructures and nanomaterials. This Opinion recognises the inevitability of the situation and has derived some broad conclusions despite the fact that the literature may include contradictions and errors. While this Opinion adheres to the notion that nanoscale presently has dimensions of up to 100 nm, it recognises that certain publications may have depicted nanoscale as having bigger dimensions than 100 nm. Much of the research on particles, particularly that on aerosols, air pollution, and inhalation toxicity, has classified particles as ultrafine, fine, or conventional. Unless otherwise noted, ‘ultrafine particles’ are presumed to be substantially identical to nanoparticles in this research.
Also, when it comes to nanoparticles, keep in mind that a sample of a substance containing nanoparticles will often comprise a range of particle sizes rather than being monodisperse. This makes determining the characteristics of the nanoscale considerably more challenging, especially when considering dosages for toxicological investigations. In this Opinion, references to studies of particle exposure and toxicity data will be made often, and the particle sizes specified in the publications will be quoted as single numbers (e.g. 40 nm) or ranges (e.g. 40 – 80 nm), with the understanding that they are approximations.
Furthermore, nanoparticles will have a tendency to agglomerate in specific settings. It’s reasonable to anticipate an aggregation of nanoparticles, which may have dimensions measured in microns rather than nanometers, to act differently than individual nanoparticles, but there’s no reason to expect the aggregate to behave like a single huge nanoparticle. Similarly, it is likely that nanoparticle behavior will be influenced by their solubility and susceptibility to degradation, and that neither the chemical composition nor particle size will remain constant over time. With the aforementioned definitions and disclaimers in mind, it’s evident that there are two sorts of nanostructures to evaluate in terms of intrinsic qualities and health risks: those where the structure is a free particle and those where the nanostructure is an essential element of a larger item.
Nanocomposites, which are solid materials with one or more dispersed phases present as nanoscale particles, and nanocrystalline solids, which have individual crystals with nanoscale dimensions, belong to the latter group. This category also includes items that have been given a surface topography with nanoscale characteristics, as well as functional components with crucial nanometre dimensions, typically electrical components. Surface alterations can be achieved for medicinal applications by utilizing nanosized materials in particular coatings (Roszek et al 2005). This Opinion acknowledges the reality of such materials and products, as well as the fact that material properties on the nanoscale can affect interactions with biological systems. Despite the fast advancement of the study of interactions between biological systems and nanotopographical characteristics, little is known about the potential for such interactions to cause harmful consequences. The danger would be related to release during usage or at the end of the product’s life cycle, and would be determined by the strength of the adhesion to the carrier material. There is currently no reason to believe that immobilized nanoparticles represent a greater risk to health or the environment than larger-size materials, as long as the nanomaterials remain fixed on the carrier’s surface.
The former group, which includes free nanoparticles, is the one that causes the most worry in terms of health hazards, and it is the focus of the majority of this Opinion. The term ‘free’ should be qualified, since it indicates that the material in question is made up of individual nanoscale particles at some point during its creation or usage. In application, these individual particles may be mixed into a quantity of another material, which may be a gas, a liquid, or a solid, to generate a paste, a gel, or a coating. Although their bioavailability will vary depending on the phase in which they are dispersed, these particles may nonetheless be termed free. This category would include ultrafine aerosols and colloids, as well as cream-based cosmetics and medicinal preparations, and it is with these instances that much of the current research on nanotechnology health implications has been concerned. The main focus of this Opinion is on the possible dangers connected with the manufacturing and use of items using engineered nanomaterials. Proteins, phospholipids, lipids, and other biological nanostructures are not considered in this context.
“I don’t care that they stole my idea, I care that they don’t have any of their own,” said one of the greatest inventors to have ever lived, the Serbian inventor Nikola Tesla, who developed the framework for modern-day electrical engineering. When Tesla began work at Edison’s DC (direct current) power plant in the United States, his new employer was not interested in his ideas for a new type of power called AC (alternating current). At the time DC was the only electrical supply, but it could only be transmitted across short distances before it lost power. To Edison, AC sounded like competition, and he persuaded Tesla to work on improving his DC system by offering him a huge sum of money. But when Tesla had done what he had been asked, Edison reneged on his promise. Tesla resigned and returned to his AC power concepts. DC power is constant and moves in one direction, and the resistance in wires causes it to lose power over distance. AC power does not suffer this problem to the same degree: its voltage can easily be raised for transmission, so the same amount of power is delivered at a lower current and with less resistive loss.

How does AC current work?
In an atom, the negatively charged electrons are bound to the nucleus by their electromagnetic attraction to the oppositely charged nucleus. But the electrons in the outermost shell, called the valence shell, can sometimes become free due to external forces. Electrons that escape the valence shell are called free electrons, and they can move from one atom to another. This movement of charge is what we call electricity. Materials that allow many electrons to move freely are conductors; materials that do not are insulators. That is why copper is a great conductor. Alternating current flows back and forth 50 to 60 times per second; this rate is called the frequency. Thomas Edison, one of the most famous and powerful men of the 19th century, tried his best to compete with Tesla. The formula for electrical power is P = I×V; by this formula, the same amount of power can be transmitted either at high current and low voltage or at low current and high voltage. But when you transmit current through wires, some power is also lost as heat. To overcome this problem, the voltage is raised so that the current, and hence the heat loss, is reduced.
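The effect of raising the voltage can be shown with a quick calculation based on P = I×V and the resistive-loss formula P_loss = I²×R (the line resistance and power figures below are assumed round numbers, not data for any real grid):

```python
def line_loss(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """Heat lost in a transmission line carrying the given power at the given voltage."""
    current = power_w / voltage_v          # I = P / V
    return current ** 2 * resistance_ohm   # P_loss = I^2 * R

P = 1_000_000   # 1 MW to deliver
R = 10          # assumed total line resistance in ohms

print(line_loss(P, 10_000, R))    # 100,000 W lost at 10 kV (10% of the power)
print(line_loss(P, 200_000, R))   # only 250 W lost at 200 kV
```

Raising the voltage twentyfold cuts the current twentyfold and the heat loss four-hundredfold, which is exactly why grids transmit at very high voltage.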
In modern electric power grids, electricity is transmitted at hundreds of thousands of volts. But the voltage cannot be this high when it arrives at your home, so a transformer steps this high voltage down to typically between 100 and 240 volts. Stepping down AC is far easier than stepping down DC. Transformers require a time-varying voltage to function, and since direct current is constant while alternating current is time-varying, transformers like these only work with AC electricity. In Edison and Tesla’s time, there was no easy way to transform voltage with direct current, and this is the primary reason Tesla’s AC won out over Edison’s DC in the early era of electrical transmission.
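The step-down a transformer performs can be sketched with the ideal-transformer relation V_secondary/V_primary = N_secondary/N_primary (the turns counts below are made-up round numbers, not real transformer specifications):

```python
def step_down(v_primary: float, n_primary: int, n_secondary: int) -> float:
    """Ideal transformer: secondary voltage scales with the turns ratio."""
    return v_primary * n_secondary / n_primary

# Stepping an assumed 11,000 V distribution voltage down to a 230 V household supply
print(step_down(11_000, 11_000, 230))  # 230.0
```

Fewer turns on the secondary winding than the primary means a lower output voltage; reversing the ratio steps the voltage up instead.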


AC current – a scientific breakthrough
Transmitting at high voltage made AC power more cost-effective, as fewer power plants were needed. Entrepreneur George Westinghouse saw the potential of Tesla’s AC power and bought his patents for AC motors. Edison began a propaganda war in an attempt to keep DC power on top, but it was inevitable that AC power would win. Almost all electricity is now delivered as Tesla’s AC power. Edison’s place in history as an inventor and electrician is secure, but in many ways Tesla went even further: he envisioned fluorescent lights, radio technology, and remote control. Nikola Tesla was one of the most forward-thinking and dynamic visionaries who ever lived.
“If your hate could be turned into electricity, it would light up the whole world”. – Nikola Tesla
India is a place where one can visit any area for many purposes, such as general tourism, medical tourism, religious tourism, games and sports tourism, educational tourism, etc. On 11 January 2022, I had the opportunity to visit a wonderful place in Telangana known as Yadagirigutta, located about 80 kilometres from my residence in Suncity, Hyderabad. I present a few lines about the place based on secondary sources and, later, on my own observations.
Yadagirigutta is a temple town, as the famous Lakshmi Narasimha Temple is situated here. It lies around 16 kilometres from the district headquarters Bhuvanagiri, 55 kilometres from Uppal, a major suburb of Hyderabad, and, as already mentioned, around 80 kilometres from Suncity, Hyderabad. It is pertinent to mention that the Hyderabad Regional Ring Road passes through Yadagirigutta (wikipedia.org/wiki/Yadagirigutta). Thousands of people visit the place every day; according to the website yadadri.telangana.gov.in/tourist-place/yadagirigutta, five thousand to eight thousand people visit daily for pujas, weddings, other family rituals, etc. The number of visitors increases significantly on weekends, holidays and festivals. Further, in the context of its name, a few points are highlighted from the website (yadadri.telangana.gov.in): “according to the myths of the Third Age, there was a sage named Yadarshi, who was the son of the great sage Sri Rishyasringa Maharshi and Santa Devi. He meditated inside the cave with the gaze of Sri Anjaneya Swami. Sri Narasimha Swami appeared before him, pleased with his devotion. The Swami himself manifested himself in five different forms as Sri Jwala Narasimha, Sri Gandabherunda, Sri Yogananda, Sri Ugra and Sri Lakshminarasimha after Swami and is therefore worshipped as the Pancharama Narasimha Kshetra. The Sudarshan Chakra is a guide for the devotees towards the temple. In the 15th century, the great king of Vijayanagara, Sri Krishnadevaraya, mentioned in his autobiography about the temple that before going to war he would always visit the temple and pray to the Lord for victory. The town is well connected to the capital and the nearest major towns by the Ghat Road. This temple is very popular in the Telangana region”.
I was highly fascinated by the beauty of the place; the view from the top was scenic. I took in, with heart and mind, the pristine beauty of nature all around. The Temple Committee has meticulously organised the flow of visitors, avoiding any chaos. As noted, every day thousands of people visit the place to have a glimpse of Bhagawan Narasimha.
Here, I wish to suggest a few things to the Government of Telangana. While buying the Prasadam for a small amount, many people have to stand in the scorching heat, so I suggest that a spacious, fully covered area be provided. I also observed that only one counter was in operation where tokens were issued (the payment counter) and another where Prasadam was distributed. My suggestion is that there should be at least two more pairs of counters: one pair (payment counter and Prasadam counter) for senior citizens and another pair (payment counter and Prasadam counter) for ladies. When I visited on 11 March 2022 there was no separate counter either for senior citizens or for ladies; only the one already mentioned was functioning for all.
Anyway, I congratulate the Government of Telangana for developing the area; as a result, many people have found employment, both self-employment and wage employment. Even eight years ago the place was not at all developed from a tourism point of view.
(I, Shankar Chatterjee, offer my gratitude to T. Sanjeeva Reddy, Legal Adviser by profession, Libdom Villa, Bandlaguda Jagir, Hyderabad for inspiring me in carrying out my academic activities)

According to a 2020 survey by the National Foundation for Credit Counseling, only 47% of Americans use budgeting tools to keep track of their spending. Yet a budget, the most basic instrument in the financial planning process, can make it much easier to meet your financial objectives.
Not only does a budget help you keep track of where your money is going, but it also gives you more control over that process. Without a clear plan for your cash flow, you could be spending against your own best interests without even knowing it.
Budgeting isn’t always enjoyable, but it’s one of the most crucial steps you can take to better your financial situation. Here are a few examples of how living on a budget might help.
– It aligns your spending with your goals: You may decide how you’ll spend your money each month depending on what’s most important to you by setting and sticking to a budget.
– It can improve your debt repayment strategy: If you’re trying to pay off student loans, credit cards, or other types of debt, a budget might help you set aside more money so you can get out of debt.
– It can help you achieve your savings goals: A budget can help you figure out how much you’re going to save toward your goal at the beginning of the month, whether you want to save more for retirement, develop your emergency fund, or put money down for your next vacation.
1. Zero-Based Budget
A zero-based budgeting strategy is straightforward: income minus expenses equals zero.
This budgeting strategy is best for people who have a fixed monthly income or can at least anticipate their monthly income. After you’ve calculated your monthly income, allocate it so that your spending and savings add up to exactly that amount.
It’s critical to budget for all of your spending as precisely as possible. If you go over budget in one category, you’ll have to make up the difference by taking money from another. And forgetting about a significant expense can throw your budget off.
A zero-based budget may be a better alternative for someone who has been budgeting for some time, because there is little room for error. Even so, keeping additional cash in your bank account as a buffer is a wise idea. Also, keep a modest emergency fund on hand in case you face a major unexpected bill.
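The zero-based rule described above (income minus expenses equals zero) can be sketched in a few lines of code. The category names and amounts below are invented examples, not recommendations:

```python
def zero_based_check(income: float, allocations: dict) -> float:
    """Return the unallocated remainder; a zero-based budget targets exactly 0."""
    return income - sum(allocations.values())

# Hypothetical monthly budget: every dollar of a 2,800 income is given a job.
budget = {
    "rent": 1200,
    "groceries": 400,
    "transport": 150,
    "savings": 500,
    "emergency_fund": 250,
    "fun": 300,
}

print(zero_based_check(2800, budget))  # 0 -> fully zero-based
```

A positive remainder means money still needs a category; a negative one means you have over-allocated and must trim a category to balance.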
2. Pay-yourself-first budget
Another simple budgeting strategy that focuses on savings and debt reduction is the pay-yourself-first budget.
Simply put, every time you are paid, you set away a particular amount for savings and debt payments, then spend the remainder of your money as you see fit. This allows you to prioritise your savings and debt payback goals while making do with the leftovers.
For instance, you might prioritise paying off high-interest debt first while gradually creating an emergency fund. However, once you’ve paid off your high-interest debt, you may concentrate on other savings goals.
Of course, prioritizing your necessary expenses and obligations is critical. However, because you’ve already taken care of what’s most essential to you, you don’t need to be concerned about where you spend your discretionary spending.
This budget is ideal for someone who has trouble saving each month or doesn’t want to spend too much time planning out each expense.
3. Envelope System Budget
This way of budgeting is similar to the zero-based budget, but there is one major difference: everything is done in cash. An envelope budgeting strategy involves planning out how you’ll spend your money each month and using an envelope for each category of spending. Then, according to your budget, you withdraw as much cash as you need to fill each envelope.
Take your grocery envelope with you when you go grocery shopping, for example, and pay for your purchases with cash. If you run out, that’s all you can spend in that category for the month, unless you choose to take cash from other envelopes. However, don’t raid other envelopes too frequently, as this can snowball and you could run out of money before the end of the month.
The envelope system is endorsed by financial expert Dave Ramsey, so it’s a good alternative for folks who share his money ideals, which emphasize paying down debt rapidly and utilizing cash rather than credit cards.
However, it’s not a smart budgeting approach for someone who doesn’t like having a lot of cash on hand or prefers to use credit or debit cards.
4. 50/30/20 Budget
The 50/30/20 budgeting method is simple and requires less effort than the envelope and zero-based budgeting methods. The goal is to categorize your spending into three groups: 50% of your income for needs, 30% for wants, and 20% for savings and debt repayment.
The biggest disadvantage is that the 50/30/20 rule may be impossible for people who have a lot of debt or want to save a lot of money because 20% isn’t a lot of money.
However, the good news is that you may tailor it to your own requirements. For example, you might wish to consider raising savings and debt repayments while minimising discretionary and necessary expenses.
To put it another way, don’t get too fixated on the 50/30/20 ratio. Make the concept fit your requirements.
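The split above can be expressed as a small helper. The function name and the sample income are my own illustration; the ratio parameters default to the classic rule but can be tailored, as the text suggests:

```python
def split_income(income: float, needs=0.50, wants=0.30, savings=0.20):
    """Divide an income according to a needs/wants/savings ratio."""
    assert abs(needs + wants + savings - 1.0) < 1e-9, "ratios must sum to 1"
    return {
        "needs": income * needs,
        "wants": income * wants,
        "savings_and_debt": income * savings,
    }

print(split_income(4000))
# {'needs': 2000.0, 'wants': 1200.0, 'savings_and_debt': 800.0}

# Tailored version: heavier debt repayment, leaner wants.
print(split_income(4000, needs=0.50, wants=0.20, savings=0.30))
```

Adjusting the keyword arguments is the programmatic equivalent of "make the concept fit your requirements."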
5. The ‘no’ budget
This unique budgeting strategy is totally based on not spending money that you don’t have, as the name implies. Rather than making a budget, you should:
– Keep an eye on the balance of your bank account. To keep track of your spending, use a budgeting app or your bank’s online banking or mobile app.
– Keep track of when your recurring expenses are due. Keeping a list in a spreadsheet, Microsoft Word document, or on a piece of paper is one way to do this.
– Set money aside for savings and additional debt repayments. Increase your automatic monthly debt payments and use automatic transfers from checking to savings wherever possible.
– Spend the remainder of your funds without overdrawing your account. You’ll be better equipped to determine how much money is remaining after key costs if you keep an eye on your account balance.
While the “no” budget sounds easier than the other techniques we’ve discussed, telling oneself “no” isn’t always easy. This budgeting strategy works best if you’ve shown spending restraint in the past and are confident in your ability to do so again.

Today, there is a plan for everything, but before you can design one, you must first comprehend the fundamentals. For example, understanding marketing ideas is critical if you want to develop a great marketing plan. You can find out the best marketing approach for you by following the five fundamental marketing concepts. Simply put, execution is a critical element in marketing that occurs only after extensive study and strategizing.
The art and process of building, implementing, and maintaining an exchange relationship is known as marketing. You start by acquiring clients, then create a relationship with them, and then keep it by meeting their demands. That customer can be an individual consumer or another business; thus, marketing can be B2C or B2B depending on the situation. The fundamental goal of marketing, however, remains the same: to develop a relationship with clients and satisfy them by meeting their requirements.
A telecommunications company, for example, develops a marketing plan that entices and persuades customers to use its phone, messaging, and internet bundles. Once customers start using the service, they are encouraged to rate it with a star rating.
When a corporation prepares and implements strategies to increase profits by increasing sales, meeting consumer requirements, and outperforming competitors, it is referred to as marketing. The goal is to create a condition that benefits both the customer and the business.
The marketing concept is based on the idea of anticipating and satisfying customer needs and wants better than competitors. Its roots are often traced to Adam Smith’s Wealth of Nations, though the idea went largely unrecognized until much later.
To fully comprehend the marketing concept, we must first understand needs, wants, and demands.
1. The Production Concept
According to the production concept, customers will be more drawn to products that are readily available and can be acquired more cheaply than rival products of the same kind. This concept arose with the rise of early capitalism in the 1950s, when businesses focused on production efficiency to ensure maximum profit and scalability.
This mindset can be beneficial when a company markets in a rapidly growing field, but it also comes with a danger. Businesses that are unduly focused on low-cost production might easily lose touch with client wants and, as a result, lose revenue, despite their low-cost and widely accessible goods.
2. The Product Concept
The product concept is the polar opposite of the production concept in that it assumes that customer buying habits are not influenced by availability or price, and that people value quality, innovation, and performance over low cost. As a result, this marketing approach emphasises product improvement and innovation on a regular basis.
Apple Inc. is a great example of how this principle works. Its target demographic anticipates the company’s new releases with bated breath. Many people will not compromise solely to save money, even if there are off-brand products that perform many of the same functions for a lower price. However, if a marketer relies solely on this idea, he or she may miss out on people who are also influenced by availability and pricing.
3. The Selling Concept
Marketing based on the selling concept focuses on getting the customer to the actual transaction without regard for the client’s wants or product quality – a costly technique. This approach generally overlooks customer satisfaction efforts and rarely results in repeat purchases.
Because the product or service isn’t a necessity, the selling concept rests on the belief that you must persuade buyers to acquire it through aggressive promotion of its merits. Soda pop is an example. Have you ever wondered why, despite the brand’s popularity, you keep seeing advertisements for Coca-Cola? Everyone understands what Coke has to offer, but it’s also common knowledge that soda is devoid of nutrients and harmful to one’s health. Coca-Cola understands this, which is why it spends such large sums of money promoting its product.
4. The Marketing Concept
The marketing concept is based on a company’s capacity to compete and maximise revenue by promoting the ways in which it provides customers with higher value than its competitors. It all comes down to knowing your target market, sensing its needs, and meeting those demands efficiently. Many refer to this as the “customer-first strategy.”
Glossier is a well-known example of this type of marketing. The brand recognises that many women are dissatisfied with the way cosmetics affect their skin’s health, and its research found that women are also fed up with being told which makeup products to use. With this in mind, Glossier launched a line of skincare and beauty products that not only hydrate the skin but also promote individualism and personal expression through the use of makeup.
5. The Societal Concept
The societal marketing concept is a new one that stresses societal well-being. It’s founded on the premise that, regardless of a company’s sales goals, marketers have a moral responsibility to sell ethically to promote what’s good for people over what people may want. Employees of a corporation live in the communities to which they market, and they should advertise in the best interests of their community.
The fast-food sector is an example of the type of problem the societal concept seeks to solve. Fast food is in high demand in our society, but it is high in fat and sugar and adds to waste. Although the industry is catering to modern consumer wishes, it is harming our health and undermining our society’s goal of environmental sustainability.
The history of rocketry dates back to around 900 C.E., but the use of rockets as highly destructive missiles able to carry large payloads of explosives was not feasible until the late 1930s. War has been the catalyst for many inventions, both benevolent and destructive. The ballistic missile is intriguing because it can be both of these things. It has made possible some of the greatest deeds mankind has ever achieved, and also some of the worst. German engineer Walter Dornberger and his team began developing rockets in 1938, but it was not until 1944 that the first ballistic missile, the Aggregate 4 (A-4), better known as the V-2 rocket, was ready for use. The V-2 was used extensively by the Nazis at the end of World War II, primarily as a terror weapon against civilian targets. The missiles were powerful and imposing: 46 feet (14 m) long, able to reach speeds of around 3,500 miles per hour (5,600 kph) and deliver a warhead of around 2,200 pounds (1,000 kg) to a range of 200 miles (320 km).

Ballistic missiles follow a ballistic flight path, determined by the brief initial powered phase of the missile’s flight. This is unlike guided missiles, such as cruise missiles, which are essentially unmanned airplanes packed with explosives. The early V-2s flew inaccurately, so they were of most use in attacking large, city-sized targets such as London, Paris, and Antwerp. The Nazi ballistic missile program has had both a great and a terrible legacy. Ballistic missiles such as the V-2 were scaled up to produce not only intercontinental ballistic missiles carrying a variety of warheads, but also the craft that have carried people into space. Ballistic missiles may have led us to the point of self-destruction, but they have also allowed us to venture beyond our atmosphere.
Intercontinental ballistic missiles (ICBMs) are guided ballistic missiles with a minimum range of 5,500 kilometers, primarily designed to deliver nuclear weapons. The Soviet Union fielded the first, the R-7, in 1957, and the United States followed in 1959. The United States, Russia, China, France, India, the United Kingdom and North Korea are among the countries with operational ICBMs. An ICBM has up to three rocket stages, each ejected or discarded after it burns out, using either liquid or solid propellant; liquid-fuel rockets tend to burn longer in the boost phase than solid-propellant ones. During the boost phase the rocket gets the missile airborne; this phase lasts around 2 to 5 minutes, until the ICBM has reached space. In the second, midcourse phase, the missile continues along its ballistic trajectory through space, travelling anywhere between 24,140 and 27,360 kilometers an hour. The final phase is separation and re-entry into Earth’s atmosphere: the nose-cone section carrying the warhead separates from the final rocket booster and drops back to Earth, and if it has thrusters, they are used at this point to orient it towards the target. ICBMs need adequate heat shields to survive re-entry; without them they burn up and fall apart. It is important to note that although several countries possess ICBMs, none has ever been fired in anger against another country.
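To illustrate why the brief powered phase determines the rest of the flight path, here is a deliberately simplified sketch: a projectile over a flat Earth in a vacuum, ignoring drag, Earth’s curvature, and rotation, all of which real trajectory models must include. The burnout speed used is a rough, V-2-like figure chosen for illustration, not a value from the article:

```python
import math

# Toy ballistic model: once the engine cuts off, the burnout speed and
# angle alone fix where the projectile lands (in this idealized setting).
# Range over a flat Earth in a vacuum: R = v^2 * sin(2*theta) / g.

G = 9.81  # gravitational acceleration, m/s^2

def ballistic_range_m(burnout_speed_mps, launch_angle_deg):
    theta = math.radians(launch_angle_deg)
    return burnout_speed_mps ** 2 * math.sin(2 * theta) / G

# Roughly V-2-like burnout speed (~1600 m/s) at a 45-degree angle:
print(round(ballistic_range_m(1600, 45) / 1000), "km")  # about 261 km
```

The toy answer lands in the same ballpark as the V-2’s real 320 km range; the gap comes from everything the model leaves out, which is exactly why guidance errors in those first few powered minutes translated into the V-2’s poor accuracy.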

“This third day of October, 1942, is the first of a new era in transportation: that of space travel.” – Walter Dornberger
The formal study of light began as an effort to explain vision. Early Greek thinkers associated vision with a ray emitted from the human eye. A surviving work from Euclid, the Greek geometrician, laid out basic concepts of perspective, using straight lines to show why objects at a distance appear shorter or smaller than they actually are. The eleventh-century Islamic scholar Abu Ali al-Hasan Ibn al-Haytham, known also by the Latinized name Alhazen, revisited the work done by Euclid and Ptolemy and advanced the study of reflection, refraction, and color. He argued that light moves out in all directions from illuminated objects and that vision results when light enters the eye. In the late 16th and 17th centuries, researchers including Dutch mathematician Willebrord Snel noticed that light bent as it passed through a lens or fluid. While many still believed the speed of light to be infinite, Danish astronomer Ole Rømer in 1676 used telescopic observations of Jupiter’s moons to estimate the speed of light at about 140,000 miles a second. Around the same time, Sir Isaac Newton used prisms to demonstrate that white light could be separated into a spectrum of basic colors. He believed that light was made of particles, whereas Dutch mathematician Christiaan Huygens described light as a wave.

The particle-versus-wave debate advanced in the 1800s. English physician Thomas Young’s experiments with vision suggested wavelike behavior, since sources of light seemed to cancel out or reinforce each other. Scottish physicist James Clerk Maxwell’s research united electricity and magnetism, showing that visible light falls along a single electromagnetic spectrum. The arrival of quantum physics in the late 19th and early 20th centuries prompted the next leap in understanding light. By studying the emission of electrons from a grid hit by a beam of light, known as the photoelectric effect, Albert Einstein concluded that light came in what he called photons, emitted as electrons changed their orbit around an atomic nucleus and then jumped back to their original state. Though Einstein’s findings seemed to favor the particle theory of light, further experiments showed that light, and matter itself, behaves both as waves and as particles.
How do lasers work?
Einstein’s work on the photoelectric effect led to the laser, an acronym for “light amplification by stimulated emission of radiation.” As electrons are excited from one quantum state to another, they emit a single photon when jumping back. But Einstein predicted that when an already excited atom was hit with the right type of stimulus, it would give off two identical photons. Subsequent experiments showed that certain source materials, such as ruby, not only did that but also emitted photons that were perfectly coherent: not scattered like the emissions of a flashlight, but all of the same wavelength and amplitude. These powerfully focused beams are now commonplace, found in grocery store scanners, handheld pointers, and cutting instruments from the hospital operating room to the shop floors of heavy industry.

Future trends in fiber optics communication
Fiber optic communication is definitely the future of data communication. Its evolution has been driven by advances in technology and increasing demand, and it is expected to continue with the development of new and more advanced communication technologies.
Another future trend will be the extension of present semiconductor lasers to a wider variety of lasing wavelengths. Shorter-wavelength lasers with very high output powers are of interest in some high-density optical applications. Presently, laser sources are available that are spectrally shaped through chirp management to compensate for chromatic dispersion. Chirp management means that the laser is controlled so that it undergoes a sudden change in its wavelength when firing a pulse, such that the chromatic dispersion experienced by the pulse is reduced. There is a need to develop instruments to characterize such lasers. Single-mode tunable lasers are also of great importance for future coherent optical systems; these lase in a single longitudinal mode that can be tuned across a range of different frequencies.
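As a rough illustration of the chromatic dispersion that chirp management compensates for, pulse broadening can be estimated as the fiber’s dispersion parameter times the link length times the source linewidth. The figures below are typical textbook values for standard single-mode fiber near 1550 nm, not values from the article:

```python
# Back-of-envelope chromatic dispersion estimate.
# Broadening (ps) = D [ps/(nm*km)] * length [km] * linewidth [nm].

def pulse_broadening_ps(D_ps_per_nm_km, length_km, linewidth_nm):
    return D_ps_per_nm_km * length_km * linewidth_nm

# Standard single-mode fiber near 1550 nm has D of roughly 17 ps/(nm*km).
broadening = pulse_broadening_ps(D_ps_per_nm_km=17, length_km=80,
                                 linewidth_nm=0.1)
print(f"{broadening:.0f} ps over 80 km")  # 136 ps
```

A pulse smeared by 136 ps starts to overlap its neighbours at multi-gigabit symbol rates, which is why narrowing the effective linewidth through chirp management (or otherwise compensating the dispersion) matters for long links.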
“Music is the arithmetic of sounds as optics is the geometry of light.” – Claude Debussy