Artificial Intelligence Applications in Public Transport


By Shashikant Nishant Sharma

Artificial Intelligence (AI) is revolutionizing various sectors, and public transport is no exception. With the ability to process vast amounts of data and make real-time decisions, AI is enhancing the efficiency, safety, and convenience of public transportation systems worldwide. Here are some of the key applications of AI in public transport:

1. Predictive Maintenance

AI-driven predictive maintenance systems use data from sensors placed on vehicles and infrastructure to predict when a part is likely to fail. This proactive approach allows for maintenance to be performed before breakdowns occur, reducing downtime and improving reliability. By analyzing patterns and trends, AI can forecast potential issues, ensuring that vehicles are always in optimal condition.
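As a rough sketch of the idea, the example below trains a simple classifier on synthetic sensor readings to flag vehicles for inspection. The feature names, thresholds, and failure model are invented for illustration, not drawn from any real transit system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Synthetic sensor readings: vibration level and motor temperature for 200 vehicles.
vibration = rng.normal(1.0, 0.3, n)
temperature = rng.normal(70, 10, n)

# Hypothetical ground truth: components running hot with high vibration tend to fail.
risk = 2.5 * (vibration - 1.0) + 0.1 * (temperature - 70)
failed = (risk + rng.normal(0, 0.5, n) > 0.8).astype(int)

X = np.column_stack([vibration, temperature])
model = LogisticRegression().fit(X, failed)

# Flag vehicles whose predicted failure probability exceeds a maintenance threshold.
prob = model.predict_proba(X)[:, 1]
flagged = np.flatnonzero(prob > 0.5)
print(f"{len(flagged)} of {n} vehicles flagged for inspection")
```

In practice the labels would come from maintenance logs rather than a simulated rule, and the model would be scored on held-out vehicles before driving inspection schedules.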

2. Traffic Management

AI algorithms are being used to manage traffic flow in real-time. By analyzing data from traffic cameras, sensors, and GPS devices, AI can adjust traffic light timings, reroute buses, and provide real-time updates to commuters. This helps to reduce congestion, minimize delays, and enhance the overall efficiency of the public transport network.

3. Autonomous Vehicles

Self-driving buses and trains are one of the most exciting applications of AI in public transport. Autonomous vehicles can operate with precision, adhere to schedules, and reduce human error. Pilot programs for autonomous buses are already underway in several cities, promising a future where public transport is not only more efficient but also safer and more reliable.

4. Smart Ticketing and Payment Systems

AI-powered ticketing systems are simplifying the payment process for passengers. Using machine learning algorithms, these systems can provide dynamic pricing based on demand, offer personalized travel recommendations, and streamline fare collection. Contactless payment options and mobile ticketing apps enhance the convenience for users, reducing the need for physical tickets and cash transactions.

5. Route Optimization

AI can analyze vast amounts of data to determine the most efficient routes for public transport vehicles. This includes considering factors such as traffic conditions, passenger demand, and historical data. By optimizing routes, AI helps in reducing travel time, lowering fuel consumption, and improving the overall service quality for passengers.
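At its core, route optimization rests on shortest-path search over a network of stops weighted by travel time. The toy example below uses Dijkstra's algorithm on a small hypothetical stop network (the stop names and travel times are invented); production systems layer demand forecasts and live traffic data on top of this kind of search:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: minimal travel time between two stops."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        time, stop, path = heapq.heappop(queue)
        if stop == goal:
            return time, path
        if stop in seen:
            continue
        seen.add(stop)
        for nxt, minutes in graph.get(stop, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (time + minutes, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical network: travel times in minutes between stops.
network = {
    "Depot":   {"Central": 7, "Market": 12},
    "Central": {"Market": 4, "Airport": 15},
    "Market":  {"Airport": 9},
}

time, path = shortest_route(network, "Depot", "Airport")
print(time, path)  # 20 ['Depot', 'Central', 'Market', 'Airport']
```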

6. Passenger Information Systems

AI enhances passenger information systems by providing real-time updates on schedules, delays, and disruptions. Chatbots and virtual assistants powered by AI can answer passenger queries, provide travel recommendations, and assist with trip planning. These systems improve the passenger experience by ensuring that they have access to accurate and timely information.

7. Safety and Security

AI is playing a crucial role in improving safety and security in public transport. Surveillance systems equipped with AI can detect unusual behavior, monitor crowd density, and identify potential threats. Facial recognition technology can be used to enhance security measures, ensuring that public transport systems remain safe for all users.

8. Energy Efficiency

AI can optimize the energy consumption of public transport vehicles. By analyzing data on fuel usage, driving patterns, and environmental conditions, AI systems can suggest ways to reduce energy consumption and emissions. This not only lowers operational costs but also contributes to a more sustainable and environmentally friendly public transport system.

9. Accessibility

AI applications are making public transport more accessible to individuals with disabilities. AI-powered apps can provide real-time information on accessible routes, help with navigation, and even assist with boarding and alighting from vehicles. This ensures that public transport is inclusive and caters to the needs of all passengers.

Conclusion

The integration of AI into public transport systems is transforming the way we travel. From improving operational efficiency and safety to enhancing the passenger experience, AI is paving the way for smarter, more reliable, and more sustainable public transport. As AI technology continues to advance, we can expect even more innovative applications that will further revolutionize the public transport industry.


How to Collect Data for Binary Logit Model


By Kavita Dehalwar

Collecting data for a binary logit model involves several key steps, each crucial to ensuring the accuracy and reliability of your analysis. Here’s a detailed guide on how to gather and prepare your data:

1. Define the Objective

Before collecting data, clearly define what you aim to analyze or predict. This definition will guide your decisions on what kind of data to collect and the variables to include. For a binary logit model, you need a binary outcome variable (e.g., pass/fail, yes/no, buy/not buy) and several predictor variables that you hypothesize might influence the outcome.

2. Identify Your Variables

  • Dependent Variable: This should be a binary variable representing two mutually exclusive outcomes.
  • Independent Variables: Choose factors that you believe might predict or influence the dependent variable. These could include demographic information, behavioral data, economic factors, etc.

3. Data Collection Methods

There are several methods you can use to collect data:

  • Surveys and Questionnaires: Useful for gathering qualitative and quantitative data directly from subjects.
  • Experiments: Design an experiment to manipulate predictor variables under controlled conditions and observe the outcomes.
  • Existing Databases: Use data from existing databases or datasets relevant to your research question.
  • Observational Studies: Collect data from observing subjects in natural settings without interference.
  • Administrative Records: Government or organizational records can be a rich source of data.

4. Sampling

Ensure that your sample is representative of the population you intend to study. This can involve:

  • Random Sampling: Every member of the population has an equal chance of being included.
  • Stratified Sampling: The population is divided into subgroups (strata), and random samples are drawn from each stratum.
  • Cluster Sampling: Randomly selecting entire clusters of individuals, where a cluster forms naturally, like geographic areas or institutions.
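As a small illustration of stratified sampling, the sketch below draws the same fraction from each stratum of a hypothetical survey frame (the zone names and sizes are invented):

```python
import pandas as pd

# Hypothetical survey frame: commuters grouped by city zone (the strata).
frame = pd.DataFrame({
    "respondent": range(100),
    "zone": ["north"] * 50 + ["south"] * 30 + ["east"] * 20,
})

# Stratified sampling: draw 20% at random from each zone,
# so the sample preserves the zone proportions of the population.
sample = frame.groupby("zone").sample(frac=0.2, random_state=42)

print(sample["zone"].value_counts().to_dict())
```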

5. Data Cleaning

Once collected, data often needs to be cleaned and prepared for analysis:

  • Handling Missing Data: Decide how you’ll handle missing values (e.g., imputation, removal).
  • Outlier Detection: Identify and treat outliers as they can skew analysis results.
  • Variable Transformation: You may need to transform variables (e.g., log transformation, categorization) to fit the model requirements or to better capture the nonlinear relationships.
  • Dummy Coding: Convert categorical independent variables into numerical form through dummy coding, especially if they are nominal without an inherent ordering.
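Dummy coding can be sketched in a few lines with pandas (the column names here are hypothetical). Note that one level is dropped to avoid the "dummy variable trap", where the full set of dummies is perfectly collinear with the model's intercept:

```python
import pandas as pd

# Hypothetical predictors: one numeric, one nominal (no inherent ordering).
df = pd.DataFrame({
    "income": [42, 58, 35, 61],
    "mode":   ["bus", "car", "bus", "train"],
})

# Dummy-code 'mode', dropping the first level ("bus") as the reference category.
coded = pd.get_dummies(df, columns=["mode"], drop_first=True)
print(list(coded.columns))  # ['income', 'mode_car', 'mode_train']
```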

6. Data Splitting

If you are also interested in validating the predictive power of your model, you should split your dataset:

  • Training Set: Used to train the model.
  • Test Set: Used to test the model, unseen during the training phase, to evaluate its performance and generalizability.
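A typical split, sketched with scikit-learn on synthetic data; stratifying on the binary outcome keeps the class balance comparable across the two sets:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical dataset: 100 observations, 3 predictors, binary outcome.
X = np.random.default_rng(0).normal(size=(100, 3))
y = np.random.default_rng(1).integers(0, 2, size=100)

# Hold out 30% for testing; stratify so both splits keep the same class balance.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)
print(X_train.shape, X_test.shape)  # (70, 3) (30, 3)
```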

7. Ethical Considerations

Ensure ethical guidelines are followed, particularly with respect to participant privacy, informed consent, and data security, especially when handling sensitive information.

8. Data Integration

If data is collected from different sources or at different times, integrate it into a consistent format in a single database or spreadsheet. This unified format will simplify the analysis.

9. Preliminary Analysis

Before running the binary logit model, conduct a preliminary analysis to understand the data’s characteristics, including distributions, correlations among variables, and a preliminary check for potential multicollinearity, which might necessitate adjustments in the model.
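One standard multicollinearity check is the variance inflation factor (VIF), computed by regressing each predictor on the others: VIF_j = 1 / (1 - R²_j). The sketch below implements this directly with NumPy on synthetic data where one predictor is nearly a copy of another; a common rule of thumb treats VIF values above about 10 as problematic:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (n_obs x n_predictors)."""
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])  # add intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1 / (1 - r2))
    return out

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
x3 = x1 + 0.1 * rng.normal(size=200)   # nearly a copy of x1 -> collinear

factors = vif(np.column_stack([x1, x2, x3]))
print([round(v, 1) for v in factors])  # x1 and x3 show large VIFs
```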

By following these steps, you can collect robust data that will form a solid foundation for your binary logit model analysis, providing insights into the factors influencing your outcome of interest.
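Putting the pieces together, here is a minimal end-to-end sketch on simulated data. The variable names and effect sizes are invented for illustration; scikit-learn's `LogisticRegression` is used here for brevity (note it applies light L2 regularization by default, so for classical inference with standard errors and p-values a package such as statsmodels would be the usual choice):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Hypothetical predictors: household income (standardized) and commute distance.
income = rng.normal(0, 1, n)
distance = rng.normal(0, 1, n)

# Simulated binary outcome (e.g. "uses public transport"): longer commutes
# raise the probability, higher income lowers it.
logit = 0.5 - 1.0 * income + 1.5 * distance
p = 1 / (1 + np.exp(-logit))
y = (rng.uniform(size=n) < p).astype(int)

X = np.column_stack([income, distance])
model = LogisticRegression().fit(X, y)

b_income, b_distance = model.coef_[0]
print(round(b_income, 2), round(b_distance, 2))  # signs match the true effects
```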


Regression Analysis: A Powerful Statistical Tool for Understanding Relationships


By Kavita Dehalwar


Regression analysis is a widely used statistical technique that plays a crucial role in various fields, including social sciences, medicine, and economics. It is a method of modeling the relationship between a dependent variable and one or more independent variables. The primary goal of regression analysis is to establish a mathematical equation that best predicts the value of the dependent variable based on the values of the independent variables.

How Regression Analysis Works

In its most common form, ordinary least squares (OLS), regression analysis fits a linear equation to a set of data points. The equation is chosen to minimize the sum of the squared differences between the observed values of the dependent variable and the values predicted by the model. It takes the form of a linear combination of the independent variables, where each coefficient represents the change in the dependent variable for a one-unit change in that independent variable, holding all other independent variables constant.
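The least-squares fit can be sketched in a few lines of NumPy on simulated data (the true coefficients below are chosen arbitrarily for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated data: y = 2 + 3*x1 - 1.5*x2 + noise
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2 + 3 * x1 - 1.5 * x2 + rng.normal(0, 0.5, size=n)

# Design matrix with a column of ones for the intercept.
X = np.column_stack([np.ones(n), x1, x2])

# Least squares: minimize the sum of squared residuals ||y - X b||^2.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))  # close to the true values [2, 3, -1.5]
```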

Types of Regression Analysis

There are several types of regression analysis, including linear regression, multiple regression, and logistic regression. Simple linear regression models the relationship between a continuous dependent variable and a single independent variable, while multiple regression extends this to several independent variables. Logistic regression models the relationship between a binary dependent variable and one or more independent variables.

Interpreting Regression Analysis Results

When interpreting the results of a regression analysis, there are several key outputs to consider. These include the estimated regression coefficient, which represents the change in the dependent variable for a one-unit change in the independent variable; the confidence interval, which provides a measure of the precision of the coefficient estimate; and the p-value, which indicates whether the relationship between the independent and dependent variables is statistically significant.
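These three outputs can be computed directly from the classical OLS formulas. The sketch below works through them on simulated data: standard errors come from the diagonal of σ²(XᵀX)⁻¹, and the confidence interval and p-value use the t distribution with n − k degrees of freedom:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(0, 1, size=n)   # true slope = 2

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Standard errors: sqrt of the diagonal of sigma^2 * (X'X)^-1
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))

# 95% confidence interval and two-sided p-value for the slope.
t_crit = stats.t.ppf(0.975, dof)
ci = (beta[1] - t_crit * se[1], beta[1] + t_crit * se[1])
t_stat = beta[1] / se[1]
p_value = 2 * stats.t.sf(abs(t_stat), dof)

print(f"slope={beta[1]:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f}), p={p_value:.2g}")
```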

Applications of Regression Analysis

Regression analysis has a wide range of applications in various fields. In medicine, it is used to investigate the relationship between various risk factors and the incidence of diseases. In economics, it is used to model the relationship between economic variables, such as inflation and unemployment. In social sciences, it is used to investigate the relationship between various social and demographic factors and social outcomes, such as education and income.

Key assumptions of regression analysis are:

  1. Linearity: The relationship between the independent and dependent variables should be linear.
  2. Normality: The residuals (the differences between the observed values and the predicted values) should be normally distributed.
  3. Homoscedasticity: The variance of the residuals should be constant (homogeneous) across all levels of the independent variables.
  4. No multicollinearity: The independent variables should not be highly correlated with each other.
  5. No autocorrelation: The residuals should be independent of each other, with no autocorrelation.
  6. Adequate sample size: The number of observations should be greater than the number of independent variables.
  7. Independence of observations: Each observation should be independent and unique, not related to other observations.
  8. No measurement error: The independent variables should be measured without error. (Contrary to a common misconception, the predictors themselves do not need to be normally distributed; the normality assumption applies to the residuals.)

Verifying these assumptions is crucial for ensuring the validity and reliability of the regression analysis results. Techniques like scatter plots, histograms, Q-Q plots, and statistical tests can be used to check if these assumptions are met.
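A few of these checks can be scripted directly. The sketch below, run on well-behaved simulated data, computes the Durbin-Watson statistic for autocorrelation (values near 2 suggest independent residuals), a Shapiro-Wilk test for residual normality, and a rough homoscedasticity check via the correlation between absolute residuals and the predictor:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 150
x = rng.normal(size=n)
y = 1 + 2 * x + rng.normal(0, 1, size=n)

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Normality of residuals: Shapiro-Wilk (a large p-value gives no evidence
# against normality).
_, p_normal = stats.shapiro(resid)

# No autocorrelation: Durbin-Watson statistic, sum of squared successive
# differences over the residual sum of squares.
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

# Homoscedasticity (rough check): residual spread should not grow with x.
corr_spread = np.corrcoef(np.abs(resid), x)[0, 1]

print(f"Shapiro p={p_normal:.2f}, Durbin-Watson={dw:.2f}, "
      f"|resid|-x corr={corr_spread:.2f}")
```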

Conclusion

Regression analysis is a powerful statistical tool that is widely used in various fields. It is a method of modeling the relationship between a dependent variable and one or more independent variables. The results of a regression analysis can be used to make predictions about the value of the dependent variable based on the values of the independent variables. It is a valuable tool for researchers and policymakers who need to understand the relationships between various variables and make informed decisions.


Understanding the Principal Component Analysis (PCA)


By Shashikant Nishant Sharma

Principal Component Analysis (PCA) is a powerful statistical technique used for dimensionality reduction while retaining most of the important information. It transforms a large set of variables into a smaller one that still contains most of the information in the large set. PCA is particularly useful in complex datasets, as it helps in simplifying the data without losing valuable information. Here’s why PCA might have been chosen for analyzing factors influencing public transportation user satisfaction, and the merits of applying PCA in this context:


Why PCA Was Chosen:

  1. Reduction of Complexity: Public transportation user satisfaction could be influenced by a multitude of factors such as service frequency, fare rates, seat availability, cleanliness, staff behavior, etc. These variables can create a complex dataset with many dimensions. PCA helps in reducing this complexity by identifying a smaller number of dimensions (principal components) that explain most of the variance observed in the dataset.
  2. Identification of Hidden Patterns: PCA can uncover patterns in the data that are not immediately obvious. It can identify which variables contribute most to the variance in the dataset, thus highlighting the most significant factors affecting user satisfaction.
  3. Avoiding Multicollinearity: In datasets where multiple variables are correlated, multicollinearity can distort the results of multivariate analyses such as regression. PCA helps in mitigating these effects by transforming the original variables into new principal components that are orthogonal (and hence uncorrelated) to each other.
  4. Simplifying Models: By reducing the number of variables, PCA allows researchers to simplify their models. This not only makes the model easier to interpret but also often improves the model’s performance by focusing on the most relevant variables.

Merits of Applying PCA in This Context:

  1. Effective Data Summarization: PCA provides a way to summarize the data effectively, which can be particularly useful when dealing with large datasets typical in user satisfaction surveys. This summarization facilitates easier visualization and understanding of data trends.
  2. Enhanced Interpretability: With PCA, the dimensions of the data are reduced to the principal components that often represent underlying themes or factors influencing satisfaction. These components can sometimes be more interpretable than the original myriad of variables.
  3. Improvement in Visualization: PCA facilitates the visualization of complex multivariate data by reducing its dimensions to two or three principal components that can be easily plotted. This can be especially useful in presenting and explaining complex relationships to stakeholders who may not be familiar with advanced statistical analysis.
  4. Focus on Most Relevant Features: PCA helps in identifying the most relevant features of the dataset with respect to the variance they explain. This focus on key features can lead to more effective and targeted strategies for improving user satisfaction.
  5. Data Preprocessing for Other Analyses: The principal components obtained from PCA can be used as inputs for other statistical analyses, such as clustering or regression, providing a cleaner, more relevant set of variables for further analysis.
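To make this concrete, the sketch below runs PCA on simulated survey data in which five satisfaction items are driven by two underlying themes. The item names and the two-theme structure are invented for illustration, but the result mirrors the use case described above: two principal components recover most of the variance in five observed variables:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 300

# Hypothetical survey: five correlated satisfaction items driven by two
# underlying themes ("service quality" and "affordability").
quality = rng.normal(size=n)
afford = rng.normal(size=n)
noise = lambda: rng.normal(0, 0.3, size=n)

items = np.column_stack([
    quality + noise(),   # cleanliness
    quality + noise(),   # staff behaviour
    quality + noise(),   # punctuality
    afford + noise(),    # fare level
    afford + noise(),    # ticket options
])

pca = PCA(n_components=2).fit(items)
print(np.round(pca.explained_variance_ratio_, 2))  # two components dominate
```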

In conclusion, PCA is a natural choice in this context because it aids in understanding and interpreting complex datasets by reducing dimensionality, identifying key factors, and avoiding issues like multicollinearity, thereby making the statistical analysis of public transportation user satisfaction more robust and insightful.


Introduction to Structural Equation Modeling


By Shashikant Nishant Sharma

Structural Equation Modeling (SEM) is a comprehensive statistical approach used widely in the social sciences for testing hypotheses about relationships among observed and latent variables. This article provides an overview of SEM, discussing its methodology, applications, and implications, with references formatted in APA style.

Introduction to Structural Equation Modeling

Structural Equation Modeling combines factor analysis and multiple regression analysis, allowing researchers to explore the structural relationship between measured variables and latent constructs. This technique is unique because it provides a multifaceted view of the relationships, considering multiple regression paths simultaneously and handling unobserved variables.

Methodology of SEM

The methodology of SEM involves several key steps: model specification, identification, estimation, testing, and refinement. Model specification defines the model structure, including which variables are treated as endogenous and which as exogenous. Model identification determines whether the specified model can be estimated from the available data. Estimation is then carried out using software such as LISREL, AMOS, or Mplus, which produces the path coefficients indicating the relationships among variables.

Estimation methods include Maximum Likelihood, Generalized Least Squares, or Bayesian estimation depending on the distribution of the data and the sample size. Model fit is then tested using indices like Chi-Square, RMSEA (Root Mean Square Error of Approximation), and CFI (Comparative Fit Index). Model refinement may involve re-specification of the model based on the results obtained in the testing phase.
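For reference, the two fit indices named above have compact closed forms. In one common textbook formulation, with χ²_M and df_M for the hypothesized model, χ²_B and df_B for the baseline (independence) model, and N the sample size:

```latex
\mathrm{RMSEA} = \sqrt{\frac{\max(\chi^2_M - df_M,\; 0)}{df_M\,(N-1)}},
\qquad
\mathrm{CFI} = 1 - \frac{\max(\chi^2_M - df_M,\; 0)}{\max(\chi^2_B - df_B,\; \chi^2_M - df_M,\; 0)}
```

Conventional cutoffs treat RMSEA below about .06 and CFI above about .95 as indicating good fit, though these thresholds are guidelines rather than strict rules.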


Applications of SEM

SEM is used across various fields such as psychology, education, business, and health sciences. In psychology, SEM helps in understanding the relationship between latent constructs like intelligence, anxiety, and job performance. In education, it can analyze the influence of teaching methods on student learning and outcomes. In business, SEM is applied to study consumer satisfaction and brand loyalty.

Challenges and Considerations

While SEM is a powerful tool, it comes with challenges such as the need for large sample sizes and complex data handling requirements. Mis-specification of the model can lead to incorrect conclusions, making model testing and refinement critical steps in the SEM process.

Conclusion

Structural Equation Modeling is a robust statistical technique that offers detailed insights into complex variable relationships. It is a valuable tool in the researcher’s toolkit, allowing for the precise testing of theoretical models.


Unveiling the Benefits of Turnitin Software in Academic Writing


By Shashikant Nishant Sharma

In the contemporary landscape of academia, where originality and authenticity reign supreme, Turnitin emerges as a beacon of integrity and excellence. This innovative software has revolutionized the way educators and students approach writing assignments, offering a plethora of benefits that extend far beyond mere plagiarism detection. From enhancing academic integrity to fostering critical thinking skills, Turnitin stands as a formidable ally in the pursuit of scholarly excellence.


1. Plagiarism Detection and Prevention:

At its core, Turnitin is renowned for its robust plagiarism detection capabilities. By comparing students’ submissions against an extensive database of academic sources, journals, and previously submitted work, Turnitin effectively identifies instances of plagiarism, whether intentional or unintentional. This feature not only promotes academic integrity but also educates students about the importance of citing sources and respecting intellectual property rights.

2. Feedback and Improvement:

Turnitin’s feedback mechanism empowers educators to provide comprehensive and constructive feedback to students. Through its intuitive interface, instructors can highlight areas of concern, offer suggestions for improvement, and commend originality. This personalized feedback loop fosters a culture of continuous improvement, encouraging students to refine their writing skills and deepen their understanding of academic conventions.

3. Enhanced Writing Skills:

By encouraging students to submit drafts through Turnitin prior to final submission, educators facilitate the development of essential writing skills. Through the process of revising and refining their work based on Turnitin’s feedback, students hone their ability to articulate ideas clearly, structure arguments logically, and cite sources accurately. This iterative approach to writing cultivates critical thinking skills and equips students with the tools necessary for success in academia and beyond.

4. Deterrent Against Academic Dishonesty:

The mere presence of Turnitin serves as a powerful deterrent against academic dishonesty. Knowing that their work will undergo rigorous scrutiny by Turnitin’s algorithm, students are less inclined to engage in unethical practices such as plagiarism or contract cheating. This proactive approach to academic integrity not only upholds the reputation of educational institutions but also instills a sense of ethical responsibility in students, preparing them for the ethical challenges they may encounter in their professional careers.

5. Data-Driven Insights:

Turnitin generates comprehensive reports that provide educators with valuable insights into students’ writing habits, trends, and areas of weakness. By analyzing these reports, instructors can tailor their teaching strategies to address specific needs, implement targeted interventions, and track students’ progress over time. This data-driven approach to instruction promotes personalized learning and empowers educators to make informed decisions that maximize student success.

6. Streamlined Grading Process:

Incorporating Turnitin into the grading process streamlines workflow for educators, allowing them to efficiently evaluate student submissions, provide feedback, and assign grades within a centralized platform. This seamless integration of assessment and feedback not only saves time but also ensures consistency and fairness in grading practices.

7. Global Reach and Accessibility:

Turnitin transcends geographical boundaries, making it accessible to educators and students worldwide. Whether in traditional classrooms or virtual learning environments, Turnitin’s cloud-based platform facilitates seamless collaboration and communication, enabling educators to engage with students regardless of their location. This global reach fosters a diverse and inclusive academic community, where ideas can be shared, challenged, and refined on a global scale.

In conclusion, Turnitin software has emerged as an indispensable tool in the realm of academic writing, offering a myriad of benefits that extend far beyond plagiarism detection. From promoting academic integrity to fostering critical thinking skills, Turnitin empowers educators and students alike to strive for excellence in scholarly pursuits. By leveraging the innovative features of Turnitin, educational institutions can cultivate a culture of integrity, innovation, and lifelong learning that prepares students for success in the ever-evolving landscape of academia and beyond.


Navigating Plagiarism Checking Services for Scholars: A Comprehensive Overview


By Shashikant Nishant Sharma

In the realm of academia, maintaining academic integrity is paramount. Plagiarism, the act of using someone else’s work without proper acknowledgment, undermines the very foundation of scholarly pursuits. To combat this issue, various plagiarism checking services have emerged, offering scholars the means to ensure their work is original and properly cited. In this article, we’ll explore some prominent plagiarism checking services, focusing on Turnitin and others, to understand their features, functionalities, and effectiveness in maintaining academic integrity.


Turnitin: Turnitin is perhaps one of the most widely recognized plagiarism detection services in academia. It offers a comprehensive platform for educators and students alike to check the originality of academic papers and assignments. Turnitin employs an extensive database of academic content, including journals, publications, and student submissions, to compare the submitted work against.

Key Features:

  1. Database: Turnitin boasts a vast repository of academic content, making it adept at identifying similarities between submitted work and existing sources.
  2. Originality Reports: Users receive detailed reports highlighting any instances of potential plagiarism, along with similarity percentages and links to the original sources.
  3. Feedback and Grading: Educators can provide feedback directly within Turnitin’s interface, facilitating a streamlined grading process while addressing plagiarism concerns.
  4. Integration: Turnitin integrates seamlessly with learning management systems (LMS), making it convenient for educators to incorporate plagiarism checks into their courses.

Limitations:

  1. Subscription-based: Turnitin typically requires a subscription, which may present a financial barrier for individual scholars or institutions with limited budgets.
  2. False Positives: Like any automated system, Turnitin may occasionally flag instances as plagiarism incorrectly, necessitating manual review and verification.

Other Plagiarism Checking Services: While Turnitin is a prominent player in the field, several other plagiarism checking services offer similar functionalities. Some notable alternatives include:

  1. Grammarly: While primarily known as a grammar checking tool, Grammarly also offers plagiarism detection features. It scans text against a vast database of web pages and academic papers to identify potential instances of plagiarism.
  2. Copyscape: Popular among website owners and content creators, Copyscape specializes in detecting duplicate content on the web. While not as comprehensive as Turnitin for academic purposes, it can still be useful for verifying originality.
  3. Plagscan: Plagscan offers a user-friendly interface and customizable settings for plagiarism detection. It allows users to upload documents directly or check web content by entering URLs.
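Under the hood, services like these rely on some form of text-similarity scoring. The snippet below is a deliberately simplified sketch, not any vendor’s actual algorithm: it compares the word n-grams of two documents and reports a similarity percentage, loosely analogous to the scores these tools display:

```python
def ngrams(text, n=3):
    """Split text into a set of lowercase word n-grams."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(doc_a, doc_b, n=3):
    """Jaccard similarity between the n-gram sets of two documents."""
    a, b = ngrams(doc_a, n), ngrams(doc_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

original = "academic integrity is the foundation of scholarly work"
copied = "academic integrity is the foundation of all scholarly work"
print(f"{similarity(original, copied):.0%}")  # → 44%
```

Real systems add indexing over enormous databases, paraphrase detection, and citation awareness, but overlap scoring of this general kind is the starting point.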

Choosing the Right Tool: Selecting the most suitable plagiarism checking service depends on various factors, including budget, specific requirements, and integration capabilities with existing systems. While Turnitin remains a top choice for academic institutions, alternative services like Grammarly and Copyscape offer valuable features for individual scholars and content creators.

Conclusion: In the pursuit of academic excellence, maintaining integrity and originality in scholarly work is non-negotiable. Plagiarism checking services play a crucial role in upholding these standards by providing scholars with the means to verify the originality of their work and ensure proper attribution to sources. Whether it’s Turnitin, Grammarly, or another tool, leveraging these services empowers scholars to contribute to knowledge dissemination ethically and responsibly in the academic community.

References

Chandere, V., Satish, S., & Lakshminarayanan, R. (2021). Online plagiarism detection tools in the digital age: a review. Annals of the Romanian Society for Cell Biology, 7110-7119.

Chuda, D., & Navrat, P. (2010). Support for checking plagiarism in e-learning. Procedia-Social and Behavioral Sciences2(2), 3140-3144.

Geravand, S., & Ahmadi, M. (2014). An efficient and scalable plagiarism checking system using bloom filters. Computers & Electrical Engineering40(6), 1789-1800.

Naik, R. R., Landge, M. B., & Mahender, C. N. (2015). A review on plagiarism detection tools. International Journal of Computer Applications125(11).

AUTOMATIC TECHNOLOGY (Artificial Intelligence)

Definition:

The replication of human intelligence functions by machines, particularly computer systems, is known as artificial intelligence. Expert systems, natural language processing, speech recognition, and machine vision are some examples of specific AI applications.

Purpose of Artificial Intelligence:

Artificial intelligence (AI) enables machines to learn from experience, adapt to new inputs, and carry out activities similar to those performed by humans. Most AI examples you hear about today, from self-driving vehicles to chess-playing computers, rely heavily on deep learning and natural language processing.

Father of Artificial Intelligence:

One of the most important figures in the field was John McCarthy. He is referred to as the “father of artificial intelligence” for his outstanding contributions to computer science and AI. McCarthy coined the term “artificial intelligence” in the 1950s, defining it as “the science and engineering of creating intelligent machines.”

History of Artificial Intelligence:

The origins of artificial intelligence (AI) can be traced back to ancient myths, tales, and legends of man-made beings endowed with intellect or consciousness by master craftsmen. Philosophers’ attempts to describe human thought as the mechanical manipulation of symbols laid the groundwork for modern artificial intelligence. This work culminated in the 1940s in the programmable digital computer, a device built on the abstract essence of mathematical reasoning. The device, and the ideas behind it, inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain. The field of AI research was established during a workshop held in the summer of 1956 on the campus of Dartmouth College in the United States. Those in attendance would go on to spearhead AI research for many years. Several of them predicted that within a generation a machine would be as intelligent as a human being, and they were given millions of dollars to realise this vision.

Types of Artificial Intelligence:

Four main categories of AI are now recognised:

1. Reactive artificial intelligence

2. Limited memory artificial intelligence

3. Theory of mind artificial intelligence

4. Self-aware artificial intelligence

1. Reactive artificial intelligence:

Reactive machines are the most fundamental category of AI. They can only respond to the conditions in front of them right now, hence the term “reactive”: they are unable to form memories or use prior experiences to inform present-day decisions.

2. Limited memory artificial intelligence:

Limited memory AI refers to a system’s capacity to retain past information and predictions and use them to inform what it does next. The machine learning design becomes somewhat more complex when memory is added.

3. Theory of mind artificial intelligence:

In psychology, “theory of mind” refers to the understanding that other people have thoughts, feelings, and emotions that influence their behaviour. Future AI systems will need to grasp that everyone, human or machine, has such inner states, and will need to adapt their behaviour accordingly in order to interact with us.

4. Self-aware artificial intelligence:

Self-aware artificial intelligence describes machines and robots that perceive, perform, and think the way human beings do. More specifically, self-aware AI would be capable of functioning like the human brain.
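The difference between the first two categories can be sketched in a few lines of code. The agents and percepts below are invented for illustration: the reactive agent maps only the current input to an action, while the limited-memory agent also consults a short window of recent inputs before deciding:

```python
from collections import deque

def reactive_agent(percept):
    """Acts on the current percept only -- no memory of the past."""
    return "brake" if percept == "obstacle" else "drive"

class LimitedMemoryAgent:
    """Keeps a short window of recent percepts to inform decisions."""
    def __init__(self, window=3):
        self.history = deque(maxlen=window)

    def act(self, percept):
        self.history.append(percept)
        # Slow down if obstacles appeared recently, even though the
        # road looks clear right now -- something a reactive agent
        # cannot do, since it has no record of past percepts.
        if "obstacle" in self.history and percept != "obstacle":
            return "slow"
        return reactive_agent(percept)

agent = LimitedMemoryAgent()
print([agent.act(p) for p in ["clear", "obstacle", "clear"]])
# → ['drive', 'brake', 'slow']
```

Self-driving systems follow the limited-memory pattern at vastly greater scale: recent sensor history, not just the current frame, shapes each decision.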

Applications of Artificial Intelligence:

* Personalized Shopping
* AI-Powered Assistants
* Fraud Prevention
* Administrative Tasks Automated to Aid Educators
* Creating Smart Content
* Voice Assistants
* Personalized Learning
* Autonomous Vehicles

Artificial intelligence used in computers:

The ability of a computer or robot controlled by a computer to perform tasks that are typically performed by humans because they call for human intelligence and judgement is known as artificial intelligence (AI).

Future of Artificial intelligence:

AI will shape the workplace of the future and our everyday lives. According to recent research from Grand View Research, the artificial intelligence market is projected to be worth USD 390.9 billion by 2025, expanding at a compound annual growth rate (CAGR) of 46.2%.
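Forecasts like this follow the standard compound-growth formula: future value = present value × (1 + rate)^years. A quick sketch (the seven-year projection window is an assumption for illustration, not a figure from the cited report):

```python
def project(present_value, cagr, years):
    """Project a value forward at a compound annual growth rate."""
    return present_value * (1 + cagr) ** years

# Working backwards from the forecast: what base-year market size
# would grow to USD 390.9 billion after seven years at 46.2% per year?
start = 390.9 / (1 + 0.462) ** 7
print(f"Implied base-year market size: USD {start:.1f} billion")
```

Running the numbers yourself like this is a useful sanity check on headline market projections.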

AI is here to improve our lives

Artificial Intelligence is one of the most exciting creations of the modern world. Recently, Satya Nadella said that AI will help bring people together; he said this while giving a session at the World Economic Forum in Switzerland.

The advancements we see around us in various fields will become even more capable with new improvements; some of them are already visible in the form of ChatGPT.

In recent times, AI has changed our surroundings in various ways. Many websites now have a bot that can talk to us and resolve our queries, and the number of automated systems around us has grown considerably across many fields and places.

There is automation even in our homes. The IoT devices we buy communicate with us with the help of AI, and virtual assistants answer our questions the same way.

AI is making various sectors more autonomous, and their dependence on people is decreasing. At the same time, tech-related jobs are increasing, so indirectly AI is creating jobs in various fields.

AI has brought changes to the medical field as well. Several robots now help perform complex surgeries that are generally difficult for surgeons, and diagnosing diseases is also getting easier thanks to AI.


Now we even have several robots that help us keep our homes clean. Many cars also offer AI assistance in the form of self-driving technology, and in the future we will see fully self-driving vehicles that take us from one place to another.

AI has also served the security sector well. We now have face recognition that scans footage and notifies the concerned authorities of any findings. AI is also proving helpful in fields like content writing and even coding: computers are now capable of writing code and rectifying errors in a completed program.

AI has also helped in highly sensitive manufacturing sectors. For example, robots are being used in automotive painting, where they eliminate the smallest of errors and deliver excellent results.

So the upcoming decade is going to be even more interesting, especially considering that the last decade was the one in which technology simply exploded. We now have devices in our homes that are remote-controlled and can be operated from anywhere in the world. The Internet has pushed the boundaries of what we can do with our gadgets; we can even consult a doctor over a video call.

Many of the improvements technology offers us are now automated, but there are still improvements we need. It also needs to be made clear that AI is not simply destroying jobs, and the fields affected by AI need to ensure that the people using it do not become overly dependent on machines.

Artificial Intelligence

Artificial Intelligence is a computer system able to perform tasks that ordinarily require human intelligence. It is the study of ideas that enable computers to do the things that make people seem intelligent.

“The art of creating machines that perform functions that require intelligence when performed by people” – Kurzweil

Some examples of AI are self-driving cars, Netflix’s recommendations, conversational bots, and smart assistants such as Siri and Alexa.

Basically, there are two types of AI – 

  • Artificial Narrow Intelligence

Artificial Narrow Intelligence (ANI), also known as “weak” AI, exists in our world today. Narrow AI is goal-oriented and programmed to perform a single task, e.g. facial recognition, Google search, smart assistance, or playing games. For example, AI systems today are used in medicine to diagnose cancer and other diseases with extreme accuracy through replication of human cognition and reasoning.

“Artificial intelligence is a set of algorithms and intelligence to try to mimic human intelligence. Machine learning is one of them, and deep learning is one of those machine learning techniques.”

  • Artificial General Intelligence

Artificial general intelligence (AGI), also known as strong AI, is the concept of a machine with general intelligence that mimics human intelligence and behaviors, with the ability to learn and apply its intelligence to solve any problem as humans do in any given situation. It is expected to be able to solve problems and be innovative, imaginative, and creative. 

Applications of AI –

  • Finance

In finance, AI applications range from chatbot assistants to fraud detection and task automation. Banks use artificial intelligence systems to organize operations, invest in stocks, and manage properties.

  • Social Sites

AI helps social sites analyze data to find out what’s trending, along with different hashtags and patterns. For example, Facebook uses machine learning and AI to serve you interesting content, recognize your face, and perform many other tasks.
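At its simplest, finding “what’s trending” is frequency counting over a stream of posts. The sketch below, using made-up posts, tallies hashtags and surfaces the most common one; real platforms layer time decay, spam filtering, and personalization on top of this basic idea:

```python
import re
from collections import Counter

posts = [
    "Loving the new phone #tech #gadgets",
    "Big match tonight! #sports",
    "AI is everywhere these days #tech #AI",
    "Another #tech breakthrough announced",
]

# Pull out every hashtag, normalize case, and count occurrences.
tags = Counter(tag.lower()
               for post in posts
               for tag in re.findall(r"#(\w+)", post))

print(tags.most_common(1))  # → [('tech', 3)]
```

Everything beyond this counting step, such as deciding how quickly old posts stop mattering, is where the platform-specific machine learning comes in.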

  • Healthcare

With the help of artificial intelligence, doctors assess patients’ health-related data and communicate risk factors to them via healthcare devices. AI also has applications in cardiology (CRG), neurology (MRI), embryology (sonography), complex operations on internal organs, and more.

  • Robotics

AI in robotics helps robots perform crucial tasks with human-like vision to detect and recognize various objects. AI not only helps a robot learn a model for performing certain tasks but also makes machines intelligent enough to act in different scenarios. Various functions are integrated into robots, such as computer vision, motion control, and object grasping, along with training data for understanding physical and logistical patterns and acting accordingly.

  • Heavy Industries

Huge machines carry risk in their manual maintenance and operation, so an automated AI agent becomes a necessary part of running them. Robots have also proven effective in repetitive jobs.

  • Gaming

AI has also been applied to video games, for example, video game bots, which are designed to stand in as opponents.
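A classic technique behind such bots is minimax search: the bot assumes its opponent plays optimally and chooses the line of play that maximizes its own worst-case outcome. Below is a minimal, game-agnostic sketch over a tiny hand-built game tree (the tree and its payoffs are invented for the example):

```python
def minimax(state, maximizing, get_moves, evaluate):
    """Best achievable score from `state`, assuming optimal opponent play."""
    moves = get_moves(state)
    if not moves:                      # terminal state: score it directly
        return evaluate(state)
    scores = (minimax(m, not maximizing, get_moves, evaluate) for m in moves)
    return max(scores) if maximizing else min(scores)

# A tiny hand-built game tree: an inner node is a list of successor
# states; a leaf is the terminal payoff for the maximizing player.
tree = [[3, 5], [2, 9]]
get_moves = lambda s: s if isinstance(s, list) else []
evaluate = lambda s: s

print(minimax(tree, True, get_moves, evaluate))  # → 3
```

The root player picks the left branch: its guaranteed payoff of 3 beats the right branch, where an optimal opponent would hold it to 2. Real game bots add depth limits, heuristic evaluation, and alpha-beta pruning, but the core recursion is this one.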

There are many applications of AI such as self-driving cars, smart assistance, google maps, and many more. Future applications are expected to bring about enormous changes.

ARTIFICIAL INTELLIGENCE

In our everyday lives, we see new innovations arising step by step. The level of technology has changed, and researchers have created some exceptional things, many of them connected to intelligence itself. Automatic machines, robots, satellites, and our cell phones are all instances of artificial intelligence at work.

In the simplest terms, artificial intelligence means building into a machine the ability to think, understand, and make decisions. AI is regarded as the most advanced form of software, and it aims to create a mind in which the computer can think the way people do.

What is Artificial Intelligence?

Artificial intelligence (AI) is a branch of computer science concerned with making machines that can think and work like people.

A few examples of this are speech and voice recognition, problem solving, teaching, learning, and planning. AI is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by people and animals.

It aims to create a computer-controlled robot or software that can think the way the human mind thinks. AI systems are continually being trained to make them better.

In practice, such a system learns from data, is trained to keep up with new inputs, and performs human-like tasks.

Thus, through artificial intelligence, machines are being built that can interact with their environment and act carefully on the data they receive.

If the idea of AI grows stronger in the future, it will come to resemble a personal companion: when you run into a problem, it will help you work through it.

History of Artificial Intelligence

Research into artificial intelligence likewise began around 1950, alongside the development of electronic computers and stored-program computers.

For a long while afterwards, however, no connection was established between a computer and the ability to think or behave like a human mind. Then Norbert Wiener made a discovery that greatly revived the early progress of AI.

He showed that intelligent human behaviour is the result of feedback mechanisms. Another step toward modern AI came with the creation of the Logic Theorist. Designed by Newell and Simon in 1955, it is regarded as the first AI program.

Father of Artificial Intelligence

The person who laid the groundwork for artificial intelligence, and is known as the father of AI, was John McCarthy, an American computer scientist. In 1956, he organized a gathering, “The Dartmouth Summer Research Project on Artificial Intelligence,” to develop the field further.

Everyone enthusiastic about machine intelligence could participate. The purpose of the gathering was to draw on the talent and expertise of interested people to assist McCarthy with this undertaking.

In later years, AI research centres were established at Carnegie Mellon University and the Massachusetts Institute of Technology. Along the way, AI also faced many challenges: the first was building a system that could solve a problem efficiently with little or no search.

The next challenge was building a system that could learn a task by itself. The first major breakthrough in artificial intelligence came when a novel program called the General Problem Solver (GPS) was created by Newell and Simon in 1957.

Kinds of Artificial Intelligence

Artificial intelligence is grouped into four kinds, a classification proposed by Arend Hintze. The categories are as follows –

Reactive machines – These machines can react to current conditions. A prominent example is Deep Blue, the IBM chess program that famously won against chess legend Garry Kasparov.

Such machines lack memory: they are entirely unable to use past experiences to inform future decisions. Instead, they analyse every possible option and pick the best one.

Limited memory – These AI systems are capable of using past experiences to inform future decisions. Unlike reactive machines, they can make predictions based on experience. Self-driving or automated vehicles are an example of this kind of artificial intelligence.

Theory of mind – You may be surprised to learn that this simply means understanding others: recognising that other beings have their own feelings, goals, needs, and emotions. This sort of AI does not yet exist.

Self-awareness – This is the highest and most complex level of artificial intelligence. Such systems would have a sense of self, along with consciousness, awareness, and emotions. It does not exist yet; when it does, it will be a revolution.

Benefits of Artificial Intelligence

Artificial intelligence benefits research not only in economics and law but also in technical topics such as validity, security, verification, and control.

Some applications of the technology, for instance, help reduce disease and hardship, which could make AI one of the most important and remarkable creations in human history. Some key benefits of AI are as follows –

Automated assistance – Organizations with advanced teams use machines on behalf of people to interact with their customers as a support or sales team.

Medical applications of AI – One of the major advantages of AI is its use in medicine, for example in an application called “radiosurgery,” which is currently used by large medical organizations in the treatment of tumours.

Reduction of errors – Another remarkable advantage of artificial intelligence is that it can reduce mistakes and increase the likelihood of achieving higher accuracy.

Conclusion

It can be concluded that artificial intelligence is an essential invention of human development; everything depends on using it correctly.

If we use it rightfully, for the sake of humanity and development, it will be a boon for us. We should not use it to harm anyone; our motto in using artificial intelligence should be clear.

Do share your thoughts on artificial intelligence below in the comment section. I hope you liked this essay on artificial intelligence.

How to Make Money Blogging

Online Courses and Workshops

Here at Smart Blogger, we make most of our income from online courses and workshops — over $1 million per year — but we are far from the only successful blog doing this. Most of the people making a lot of money from their blogs are doing it with online courses and workshops.

Books and Ebooks

Quite a few writers have parlayed their blogging success into a major publishing deal. Mark Manson, for instance, published a blog post in 2015. Millions of readers later, he got a book deal with Harper Collins and went on to sell over 3,000,000 copies in the US alone.

Affiliate Marketing

If you’d like to create some passive income streams from your blog, one of the best choices is affiliate marketing — recommending the services, digital products, and physical products of other companies in exchange for a commission.

Advertising

Normally, we’re not big fans of selling ads on your site. You need roughly a million visitors per year for the large ad networks to take you seriously, and affiliate marketing is almost always more profitable and just as passive.

That being said, some niches like recipes, fashion, and news are hard to monetize through many of the other methods mentioned here, and they get LOTS of page views. In that case, putting a few ads on your site can make sense as a supplementary income source.

Speaking

There are many reasons to start a blog for personal use and only a handful of strong ones for business blogging. Blogging for business, projects, or anything else that might bring you money has a very straightforward purpose – to rank your website higher in Google SERPs, a.k.a. increase your visibility.

As a business, you rely on consumers to keep buying your products and services. As a new business, you rely on blogging to help you get to potential consumers and grab their attention. Without blogging, your website would remain invisible, whereas running a blog makes you searchable and competitive.

Blogs and websites

Many people still wonder if there is any difference between a blog and a website. What is a blog and what is a website? It’s even more challenging to differentiate between the two today. Many companies are integrating blogs into their websites as well, which further confuses the two.

Artificial Intelligence for the Next Decade

What is artificial intelligence?

Artificial intelligence refers to the capacity of a digital computer or computer-controlled robot to carry out tasks normally associated with intelligent beings. The term is often applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. What artificial intelligence will do in the next decade is beyond imagination.

History

It all began with Alan Turing and his idea of machines that could think like people. He created the Turing test, which helped assess whether a machine could think like a human. In the late 1950s, research on AI technologies began, and people started designing devices that could think and behave like humans.

The term “artificial intelligence” was coined in 1956. Early AI research in the 1950s explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took an interest in this kind of work and began training computers to mimic basic human reasoning.

Siri and Alexa are now easily recognized names, but DARPA delivered intelligent personal assistants as early as 2003.

Future of AI

Experts say the rise of AI will make most people better off over the course of the next decade, though many have concerns about what advances in AI will mean for what it is to be human, to be productive, and to exercise free will.

The future of artificial intelligence over the next decade looks very encouraging and momentous. For one, we will see a surge in the use of this technology in everyday life.

For organizations, monitoring and rethinking current processes helps build what comes next. The future scope of artificial intelligence has paved the way for smart monitoring, faster feedback, and improved lines of business.

A clear view into customers’ minds can generate insights and help you make real-time decisions that improve their experiences. Mobile applications and other media that use AI will help you recognize what customers want, and data-driven insights lead to personalized solutions and better interactions.

There will be more people behind AI solutions, so human work will be in sync with AI machines. This collaboration will lead to successful engagement and better accessibility, and future devices will have more data to learn from, which will ultimately help them make better decisions.

In the next few years, organizations that have not yet shown interest in AI solutions will begin adopting this technology, opening the door to more competition and improved business processes.

In the long run, artificial intelligence will assist with content creation, broader solutions, and improvements across all aspects of industry.

Relation with humans

In a world where people struggle to meet basic healthcare needs, AI will ease access and improve awareness. Past data combined with clinical advances can support better forecasting and enable faster cures.

The wide adoption of brain-machine interfaces will lead to a huge extension of human cognition and could allow people to address many ailments, including paralysis, blindness, anxiety, and addiction.

AI-powered robots work alongside humans to perform a limited range of tasks like assembly and stacking, while predictive-analysis sensors keep equipment running smoothly.

Digital therapeutics, custom-designed drugs, and improved diagnosis are already making treatments more affordable, accessible, and accurate, and are helping people live longer, healthier lives.

Role in changing the world

At the beginning of 2020, General Motors and Honda unveiled the Cruise Origin, an electric-powered driverless vehicle, while Waymo, the self-driving group inside Google parent Alphabet, recently opened its robotaxi service to the general public in Phoenix, Arizona, covering a 50-square-mile area of the city.

Lately, the accuracy of facial-recognition systems has leapt forward, to the point where Chinese tech giant Baidu says it can match faces with 99% accuracy, provided the face is clear enough on video. While police forces in western countries have generally only trialled facial-recognition systems at large events, authorities in China are mounting a nationwide program to link CCTV across the country to facial recognition, are using AI systems to track suspects and suspicious behaviour, and have also expanded the use of facial-recognition glasses by police.

Anticipation and prediction of AI for the next decade

Consider AI’s future scope before expecting better insights and improved logic from it. The technology is brilliant with numbers; however, it does not have the creativity or the intuition to innovate on its own.

Experts predict that networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency, and skills. They discuss wide-ranging possibilities: computers may match or even exceed human intelligence and capability on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition, and language translation. They say “smart” systems in communities, in vehicles, in buildings and utilities, on farms, and in business processes will save time, money, and lives and offer opportunities for individuals to enjoy a more customized future.

Adverse effect

We have already discussed how the technology uses information you have provided, including your name and age, to decide how to personalize its responses. Advances in artificial intelligence can pose a threat to digital security, and most organizations hold a great deal of data about you.

There is growing concern about the way AI systems can codify the human biases and societal inequities reflected in their training data. These fears have been borne out by numerous cases in which a lack of diversity in the data used to train such systems has had negative real-world consequences.

As artificial intelligence advances automation, many job roles will become outdated, and people whose roles have no future will need to acquire new skills or retrain for new kinds of work.

It is important to be aware of the upcoming disruptions with the pace of technology innovation and not just blindly enjoy the benefits that AI brings.

AI

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to mimic human actions and thinking. It is an umbrella term covering any machine that can be programmed to exhibit human traits like learning and problem-solving. The ideal characteristics of AI are its ability to rationalize and to take actions that achieve a specified goal.

When hearing “AI,” people often visualize robots, thanks to big-budget Hollywood movies and novels like “The Terminator.” People often think of AI as the distant future, without acknowledging how AI has already crept into our daily lives. AI can do a variety of subtle tasks: creating a playlist based on your taste in a music app, defeating humans in complex games (Deep Blue, the DeepMind systems), driving cars, and more. AI is based on the principle that human intelligence can be described precisely enough for a machine to reproduce it. The goals of artificial intelligence include learning, reasoning, and perception in order to accomplish tasks that require human intelligence.

AI is commonly divided into three categories: ANI, AGI, and ASI. ANI, Artificial Narrow Intelligence, describes a system designed for a narrow range of abilities, like facial recognition, playing chess, or providing assistance through Siri or Google Assistant. AGI, Artificial General Intelligence, is on par with human intelligence; it can think, understand, and act in a way that is indistinguishable from a human. Today’s AI is speculated to be decades away from AGI. ASI, Artificial Superintelligence, is a hypothetical AI that would surpass human intelligence and capabilities and would be self-aware.

People often see AI as the next big thing in technology without realizing how deeply it has already become knitted into our lives. We now rely on AI more than ever, whether asking Siri for directions, asking Alexa to switch on a light, or just listening on Spotify, which creates playlists based on our taste.

Many more applications are being devised in transportation, manufacturing, healthcare, and education. Tesla is making breakthroughs in AI, such as self-driving cars and automated manufacturing. AI is being used in journalism too: Bloomberg uses its Cyborg technology to make quick sense of convoluted financial reports. AI is also helping medical services, diagnosing diseases more accurately than ever and speeding up drug discovery.

With so many benefits, many people advocate for AI and even say that only Luddites worry about its safety, but it is not so simple. As they say, a coin has two sides, and every pro comes with a con. An AI is not automatically benign: it would follow our orders and accomplish the task at any cost, without being able to think about the consequences of its actions. Humans are affected not only by results but also by the method used to achieve them. For example, you might ask an AI to eradicate evil, and it might ultimately turn on you, because humans make errors that can unintentionally cause evil, as in the movie “The Terminator.” Or you could ask a self-driving car to reach a destination, and it would not matter to the car whether it accomplished the task by hook or by crook; you could end up at the airport covered in vomit. It accomplished what you asked for, but not what you intended. AI development is therefore a risky business, and we have to be careful about what the experts are building. If the technology could harm even one person, it won’t be beneficial to humanity no matter how many benefits it might have.
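To make the playlist example above concrete, here is a minimal sketch of taste-based recommendation (the songs, features, and numbers are invented for illustration): each song is a feature vector, the listener’s taste is an average of recently liked tracks, and songs are ranked by cosine similarity. Services like Spotify use far richer models, but the underlying ranking idea is similar:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# Made-up song features: (energy, acousticness, scaled tempo)
library = {
    "Song A": (0.9, 0.1, 0.8),
    "Song B": (0.2, 0.9, 0.3),
    "Song C": (0.5, 0.5, 0.5),
}
user_taste = (0.85, 0.15, 0.75)   # averaged from recently liked tracks

# Rank the library by closeness to the listener's taste vector.
ranked = sorted(library, key=lambda s: cosine(library[s], user_taste),
                reverse=True)
print(ranked[0])  # the song whose features best match the listener
```

High-energy Song A tops the ranking because its feature vector points in nearly the same direction as the taste vector, which is exactly what cosine similarity measures.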