How to Collect Data for Binary Logit Model


By Kavita Dehalwar

Collecting data for a binary logit model involves several key steps, each crucial to ensuring the accuracy and reliability of your analysis. Here’s a detailed guide on how to gather and prepare your data:

1. Define the Objective

Before collecting data, clearly define what you aim to analyze or predict. This definition will guide your decisions on what kind of data to collect and the variables to include. For a binary logit model, you need a binary outcome variable (e.g., pass/fail, yes/no, buy/not buy) and several predictor variables that you hypothesize might influence the outcome.

2. Identify Your Variables

  • Dependent Variable: This should be a binary variable representing two mutually exclusive outcomes.
  • Independent Variables: Choose factors that you believe might predict or influence the dependent variable. These could include demographic information, behavioral data, economic factors, etc.

3. Data Collection Methods

There are several methods you can use to collect data:

  • Surveys and Questionnaires: Useful for gathering qualitative and quantitative data directly from subjects.
  • Experiments: Design an experiment to manipulate predictor variables under controlled conditions and observe the outcomes.
  • Existing Databases: Use data from existing databases or datasets relevant to your research question.
  • Observational Studies: Collect data from observing subjects in natural settings without interference.
  • Administrative Records: Government or organizational records can be a rich source of data.

4. Sampling

Ensure that your sample is representative of the population you intend to study. This can involve:

  • Random Sampling: Every member of the population has an equal chance of being included.
  • Stratified Sampling: The population is divided into subgroups (strata), and random samples are drawn from each stratum.
  • Cluster Sampling: Randomly selecting entire clusters of individuals, where a cluster forms naturally, like geographic areas or institutions.

5. Data Cleaning

Once collected, data often needs to be cleaned and prepared for analysis:

  • Handling Missing Data: Decide how you’ll handle missing values (e.g., imputation, removal).
  • Outlier Detection: Identify and treat outliers as they can skew analysis results.
  • Variable Transformation: You may need to transform variables (e.g., log transformation, categorization) to fit the model requirements or to better capture the nonlinear relationships.
  • Dummy Coding: Convert categorical independent variables into numerical form through dummy coding, especially if they are nominal without an inherent ordering.
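To make the cleaning steps above concrete, here is a minimal R sketch; the data frame df and its columns income, age, and region are hypothetical, so adapt the names to your own dataset:

# Minimal sketch of common cleaning steps on a hypothetical data frame "df"
df <- na.omit(df)                                   # drop rows with missing values (or impute instead)
df <- df[df$income <= quantile(df$income, 0.99), ]  # crude outlier treatment: trim the top 1%
df$log_income <- log(df$income)                     # log-transform a skewed predictor
df$region <- factor(df$region)                      # declare a nominal variable as a factor so that
                                                    # glm()/model.matrix() dummy-code it automatically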

6. Data Splitting

If you are also interested in validating the predictive power of your model, you should split your dataset:

  • Training Set: Used to train the model.
  • Test Set: Used to test the model, unseen during the training phase, to evaluate its performance and generalizability.
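As an illustration, a simple random split in R might look like the following (df is a hypothetical data frame, and the 70/30 ratio is just one common choice):

# Minimal sketch of a random train/test split
set.seed(123)                                                  # fix the random seed for reproducibility
train_idx <- sample(seq_len(nrow(df)), size = floor(0.7 * nrow(df)))
train <- df[train_idx, ]                                       # used to train the model
test  <- df[-train_idx, ]                                      # held out to evaluate generalizability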

7. Ethical Considerations

Ensure ethical guidelines are followed, particularly with respect to participant privacy, informed consent, and data security, especially when handling sensitive information.

8. Data Integration

If data is collected from different sources or at different times, integrate it into a consistent format in a single database or spreadsheet. This unified format will simplify the analysis.

9. Preliminary Analysis

Before running the binary logit model, conduct a preliminary analysis to understand the data’s characteristics, including distributions, correlations among variables, and a preliminary check for potential multicollinearity, which might necessitate adjustments in the model.
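For example, a quick preliminary check in R could be sketched as follows (df and the predictor names are hypothetical, and the predictors are assumed numeric):

# Minimal sketch of a preliminary analysis
summary(df)                                  # distributions, ranges, and missingness per variable
cor(df[, c("age", "income", "education")])   # large pairwise correlations hint at multicollinearity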

By following these steps, you can collect robust data that will form a solid foundation for your binary logit model analysis, providing insights into the factors influencing your outcome of interest.


Unlocking Insights: The Binary Logit Model Explained


By Shashikant Nishant Sharma

The binary logit model is a statistical technique widely used in various fields such as economics, marketing, medicine, and political science to analyze decisions where the outcome is binary—having two possible states, typically “yes” or “no.” Understanding the model provides valuable insights into factors influencing decision-making processes.

Key Elements of the Binary Logit Model:

  1. Outcome Variable:
    • This is the dependent variable and is binary. For instance, it can represent whether an individual purchases a product (1) or not (0), whether a patient recovers from an illness (1) or does not (0), or whether a customer renews their subscription (1) or cancels it (0).
  2. Predictor Variables:
    • The independent variables, or predictors, are those factors that might influence the outcome. Examples include age, income, education level, or marketing exposure.
  3. Logit Function:
    • The model uses a logistic (sigmoid) function to transform the predictors’ linear combination into probabilities that lie between 0 and 1. The logit equation typically looks like this:
    p = 1 / (1 + e^−(β0 + β1X1 + β2X2 + … + βnXn)). Here, p is the probability of the outcome occurring, and βi are the coefficients associated with each predictor variable Xi.
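In practice the model is fitted with standard statistical software. A minimal R sketch, assuming a hypothetical data frame df with a 0/1 outcome purchased and predictors age and income, might look like this:

# Minimal sketch: fitting a binary logit model with glm()
fit <- glm(purchased ~ age + income, data = df, family = binomial(link = "logit"))
summary(fit)                               # coefficients are reported on the log-odds scale
p_hat <- predict(fit, type = "response")   # predicted probabilities between 0 and 1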

How It Works:

The logistic curve, often referred to as an “S-curve,” describes the relationship between the predictor value (horizontal axis) and the predicted probability (vertical axis): the logit function transforms a linear combination of predictor variables into probabilities ranging between 0 and 1.

  • A probability threshold of 0.5 is often used to classify the two outcomes: above this threshold an event is predicted to occur (1), and below it the event is predicted not to occur (0).
  • The steepest portion of the curve indicates where changes in the predictor value have the most significant impact on the probability.
  • Coefficient Estimation:
    • The coefficients (β) are estimated using the method of maximum likelihood. The process finds the values that maximize the likelihood of observing the given outcomes in the dataset.
  • Odds and Odds Ratios:
    • The odds represent the ratio of the probability of an event happening to it not happening. The model outputs an odds ratio for each predictor, indicating how a one-unit change in the predictor affects the odds of the outcome.
  • Interpreting Results:
    • Coefficients indicate the direction of the relationship between predictors and outcomes. Positive coefficients suggest that increases in the predictor increase the likelihood of the outcome. Odds ratios greater than one imply higher odds of the event with higher predictor values.
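Continuing the hypothetical glm() sketch above, odds ratios and approximate confidence intervals can be obtained by exponentiating the coefficients:

# Minimal sketch: odds ratios from the fitted logit model
exp(coef(fit))              # e^beta: multiplicative change in the odds per one-unit increase
exp(confint.default(fit))   # Wald-type confidence intervals on the odds-ratio scale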

Applications:

  1. Marketing Analysis: Understanding customer responses to a new product or marketing campaign.
  2. Healthcare: Identifying factors influencing recovery or disease progression.
  3. Political Science: Predicting voter behavior or election outcomes.
  4. Economics: Studying consumer behavior in terms of buying decisions or investment choices.

Limitations:

  • Assumptions: The model assumes a linear relationship between the log-odds and predictor variables, which may not always hold.
  • Data Requirements: Requires a sufficient amount of data for meaningful statistical analysis.
  • Model Fit: Goodness-of-fit assessments, such as the Hosmer-Lemeshow test or ROC curves, are crucial for evaluating model accuracy.

Conclusion:

The binary logit model provides a robust framework for analyzing decisions and predicting binary outcomes. By understanding the relationships between predictor variables and outcomes, businesses, researchers, and policymakers can unlock valuable insights to inform strategies and interventions.


Regression Analysis: A Powerful Statistical Tool for Understanding Relationships


By Kavita Dehalwar


Regression analysis is a widely used statistical technique that plays a crucial role in various fields, including social sciences, medicine, and economics. It is a method of modeling the relationship between a dependent variable and one or more independent variables. The primary goal of regression analysis is to establish a mathematical equation that best predicts the value of the dependent variable based on the values of the independent variables.

How Regression Analysis Works

Regression analysis involves fitting a linear equation to a set of data points. The equation is designed to minimize the sum of the squared differences between the observed values of the dependent variable and the predicted values. The equation takes the form of a linear combination of the independent variables, with each independent variable having a coefficient that represents the change in the dependent variable for a one-unit change in that independent variable, while holding all other independent variables constant.
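As a simple illustration, an ordinary least squares fit in R might be sketched as follows (the data frame df and the variables income, age, and education are hypothetical):

# Minimal sketch: linear regression with lm()
model <- lm(income ~ age + education, data = df)
summary(model)    # coefficients, standard errors, p-values, and R-squared
coef(model)       # each coefficient: change in income per one-unit change in that predictor,
                  # holding the other predictors constant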

Types of Regression Analysis

There are several types of regression analysis, including linear regression, logistic regression, and multiple regression. Linear regression is used to model the relationship between a continuous dependent variable and one or more independent variables. Logistic regression is used to model the relationship between a binary dependent variable and one or more independent variables. Multiple regression is used to model the relationship between a continuous dependent variable and multiple independent variables.

Interpreting Regression Analysis Results

When interpreting the results of a regression analysis, there are several key outputs to consider. These include the estimated regression coefficient, which represents the change in the dependent variable for a one-unit change in the independent variable; the confidence interval, which provides a measure of the precision of the coefficient estimate; and the p-value, which indicates whether the relationship between the independent and dependent variables is statistically significant.

Applications of Regression Analysis

Regression analysis has a wide range of applications in various fields. In medicine, it is used to investigate the relationship between various risk factors and the incidence of diseases. In economics, it is used to model the relationship between economic variables, such as inflation and unemployment. In social sciences, it is used to investigate the relationship between various social and demographic factors and social outcomes, such as education and income.

Key assumptions of regression analysis are:

  1. Linearity: The relationship between the independent and dependent variables should be linear.
  2. Normality: The residuals (the differences between the observed values and the predicted values) should be normally distributed.
  3. Homoscedasticity: The variance of the residuals should be constant (homogeneous) across all levels of the independent variables.
  4. No multicollinearity: The independent variables should not be highly correlated with each other.
  5. No autocorrelation: The residuals should be independent of each other, with no autocorrelation.
  6. Adequate sample size: The number of observations should be greater than the number of independent variables.
  7. Independence of observations: Each observation should be independent and unique, not related to other observations.
  8. Note on predictor distributions: The independent variables themselves do not need to be normally distributed; the normality assumption (point 2) applies to the residuals, not to the predictors.

Verifying these assumptions is crucial for ensuring the validity and reliability of the regression analysis results. Techniques like scatter plots, histograms, Q-Q plots, and statistical tests can be used to check if these assumptions are met.
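A minimal sketch of such checks in R, continuing the hypothetical lm() example above and assuming numeric predictors, might be:

# Minimal sketch of regression diagnostics
par(mfrow = c(2, 2))
plot(model)                         # residuals vs. fitted, Q-Q plot, scale-location, leverage
hist(resid(model))                  # rough visual check of residual normality
cor(df[, c("age", "education")])    # pairwise correlations as a simple multicollinearity screen
# car::vif(model)                   # variance inflation factors, if the car package is installed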

Conclusion

Regression analysis is a powerful statistical tool that is widely used in various fields. It is a method of modeling the relationship between a dependent variable and one or more independent variables. The results of a regression analysis can be used to make predictions about the value of the dependent variable based on the values of the independent variables. It is a valuable tool for researchers and policymakers who need to understand the relationships between various variables and make informed decisions.


Understanding the Principal Component Analysis (PCA)


By Shashikant Nishant Sharma

Principal Component Analysis (PCA) is a powerful statistical technique used for dimensionality reduction while retaining most of the important information. It transforms a large set of variables into a smaller one that still contains most of the information in the large set. PCA is particularly useful in complex datasets, as it helps in simplifying the data without losing valuable information. Here’s why PCA might have been chosen for analyzing factors influencing public transportation user satisfaction, and the merits of applying PCA in this context:


Why PCA Was Chosen:

  1. Reduction of Complexity: Public transportation user satisfaction could be influenced by a multitude of factors such as service frequency, fare rates, seat availability, cleanliness, staff behavior, etc. These variables can create a complex dataset with many dimensions. PCA helps in reducing this complexity by identifying a smaller number of dimensions (principal components) that explain most of the variance observed in the dataset.
  2. Identification of Hidden Patterns: PCA can uncover patterns in the data that are not immediately obvious. It can identify which variables contribute most to the variance in the dataset, thus highlighting the most significant factors affecting user satisfaction.
  3. Avoiding Multicollinearity: In datasets where multiple variables are correlated, multicollinearity can distort the results of multivariate analyses such as regression. PCA helps in mitigating these effects by transforming the original variables into new principal components that are orthogonal (and hence uncorrelated) to each other.
  4. Simplifying Models: By reducing the number of variables, PCA allows researchers to simplify their models. This not only makes the model easier to interpret but also often improves the model’s performance by focusing on the most relevant variables.

Merits of Applying PCA in This Context:

  1. Effective Data Summarization: PCA provides a way to summarize the data effectively, which can be particularly useful when dealing with large datasets typical in user satisfaction surveys. This summarization facilitates easier visualization and understanding of data trends.
  2. Enhanced Interpretability: With PCA, the dimensions of the data are reduced to the principal components that often represent underlying themes or factors influencing satisfaction. These components can sometimes be more interpretable than the original myriad of variables.
  3. Improvement in Visualization: PCA facilitates the visualization of complex multivariate data by reducing its dimensions to two or three principal components that can be easily plotted. This can be especially useful in presenting and explaining complex relationships to stakeholders who may not be familiar with advanced statistical analysis.
  4. Focus on Most Relevant Features: PCA helps in identifying the most relevant features of the dataset with respect to the variance they explain. This focus on key features can lead to more effective and targeted strategies for improving user satisfaction.
  5. Data Preprocessing for Other Analyses: The principal components obtained from PCA can be used as inputs for other statistical analyses, such as clustering or regression, providing a cleaner, more relevant set of variables for further analysis.
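As an illustration, PCA on a handful of satisfaction items could be run in R with prcomp(); the data frame df and the item names below are hypothetical and assumed to be numeric ratings:

# Minimal sketch: PCA on hypothetical survey items
items <- df[, c("frequency", "fare", "cleanliness", "staff")]
pca <- prcomp(items, center = TRUE, scale. = TRUE)   # standardize items before PCA
summary(pca)       # proportion of variance explained by each principal component
pca$rotation       # loadings: how strongly each item contributes to each component
scores <- pca$x    # component scores, usable as inputs to later regression or clustering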

In conclusion, PCA was likely chosen in the paper because it aids in understanding and interpreting complex datasets by reducing dimensionality, identifying key factors, and avoiding issues like multicollinearity, thereby making the statistical analysis more robust and insightful regarding public transportation user satisfaction.


Exploring Spatial-Temporal Analysis Techniques: Insights and Applications


By Shashikant Nishant Sharma

Spatial temporal analysis is an innovative field at the intersection of geography and temporal data analysis, involving the study of how objects or phenomena are organized in space and time. The techniques employed in spatial temporal analysis are crucial for understanding complex patterns and dynamics that vary over both space and time. This field has grown significantly with the advent of big data and advanced computing technologies, leading to its application in diverse areas such as environmental science, urban planning, public health, and more. This article delves into the core techniques of spatial temporal analysis, highlighting their significance and practical applications.


Key Techniques in Spatial Temporal Analysis

1. Time-Series Analysis

This involves statistical techniques that deal with time series data, or data points indexed in time order. In spatial temporal analysis, time-series methods are adapted to analyze changes at specific locations over time, allowing for the prediction of future patterns based on historical data. Techniques such as autoregressive models (AR), moving averages (MA), and more complex models like ARIMA (Autoregressive Integrated Moving Average) are commonly used.
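As a small illustration, an ARIMA model for a single location's series can be sketched in base R; values below is a hypothetical numeric vector of monthly observations:

# Minimal sketch: fitting an ARIMA model to one location's time series
y <- ts(values, frequency = 12)        # monthly series (hypothetical data)
fit <- arima(y, order = c(1, 1, 1))    # ARIMA(p = 1, d = 1, q = 1)
predict(fit, n.ahead = 6)              # point forecasts and standard errors for the next 6 periods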

2. Geostatistical Analysis

Geostatistics involves the study and modeling of spatial continuity of geographical phenomena. A key technique in this category is Kriging, an advanced interpolation method that gives predictions for unmeasured locations based on the spatial correlation structures of observed data. Geostatistical models are particularly effective for environmental data like pollution levels and meteorological data.

3. Spatial Autocorrelation

This technique measures the degree to which a set of spatial data may be correlated to itself in space. Tools such as Moran’s I or Geary’s C provide measures of spatial autocorrelation and are essential in detecting patterns like clustering or dispersion, which are important in fields such as epidemiology and crime analysis.
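For illustration, Moran's I can be computed directly from its definition in a few lines of R; x is a hypothetical vector of values at n locations and w a matching n-by-n spatial weights matrix with zeros on the diagonal. In practice, packages such as spdep provide tested implementations.

# Minimal sketch: Moran's I from first principles
morans_i <- function(x, w) {
  n <- length(x)
  z <- x - mean(x)                                 # deviations from the mean
  (n / sum(w)) * sum(w * outer(z, z)) / sum(z^2)   # (n / S0) * sum_ij w_ij z_i z_j / sum_i z_i^2
}
# Values well above the expectation of -1/(n - 1) suggest clustering of similar values;
# values well below it suggest dispersion.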

4. Point Pattern Analysis

Point pattern analysis is used to analyze the spatial arrangement of points in a study area, which could represent events, features, or other phenomena. Techniques such as nearest neighbor analysis or Ripley’s K-function help in understanding the distributions and interactions of these points, which is useful in ecology to study the distribution of species or in urban studies for the distribution of features like public amenities.

5. Space-Time Clustering

This technique identifies clusters or hot spots that appear in both space and time, providing insights into how they develop and evolve. Space-time clustering is crucial in public health for tracking disease outbreaks and in law enforcement for identifying crime hot spots. Tools like the Space-Time Scan Statistic are commonly used for this purpose.

6. Remote Sensing and Movement Data Analysis

Modern spatial temporal analysis often incorporates remote sensing data from satellites, drones, or other aircraft, which provide rich datasets over large geographic areas and time periods. Techniques to analyze this data include change detection algorithms, which can track changes in land use, vegetation, water bodies, and more over time. Movement data analysis, including the tracking of animals or human mobility patterns, utilizes similar techniques to understand and predict movement behaviors.

Applications of Spatial Temporal Analysis

  • Environmental Monitoring: Understanding changes in climate variables, deforestation, or pollution spread.
  • Urban Planning: Analyzing traffic patterns, urban growth, and resource allocation.
  • Public Health: Tracking disease spread, determining the effectiveness of interventions, and planning healthcare resources.
  • Disaster Management: Monitoring changes in real-time during natural disasters like floods or hurricanes to inform emergency response and recovery efforts.
  • Agriculture: Optimizing crop rotation, irrigation scheduling, and pest management through the analysis of temporal changes in crop health and environmental conditions.

Conclusion

Spatial temporal analysis provides a robust framework for making sense of complex data that varies across both space and time. As technology evolves and data availability increases, the techniques and applications of this analysis continue to expand, offering profound insights across multiple domains. Whether through improving city planning, enhancing disease surveillance, or monitoring environmental changes, spatial temporal analysis is a pivotal tool in data-driven decision-making processes. As we move forward, the integration of more sophisticated machine learning models and real-time data streams will likely enhance the depth and breadth of spatial temporal analyses even further, opening new frontiers for research and application.


Introduction to Structural Equation Modeling


By Shashikant Nishant Sharma

Structural Equation Modeling (SEM) is a comprehensive statistical approach used widely in the social sciences for testing hypotheses about relationships among observed and latent variables. This article provides an overview of SEM, discussing its methodology, applications, and implications, with references formatted in APA style.

Introduction to Structural Equation Modeling

Structural Equation Modeling combines factor analysis and multiple regression analysis, allowing researchers to explore the structural relationship between measured variables and latent constructs. This technique is unique because it provides a multifaceted view of the relationships, considering multiple regression paths simultaneously and handling unobserved variables.

Methodology of SEM

The methodology of SEM involves several key steps: model specification, identification, estimation, testing, and refinement. The model specification involves defining the model structure, which includes deciding which variables are to be considered endogenous and exogenous. Model identification is the next step and determines whether the specified model is estimable. Then, the model estimation is executed using software like LISREL, AMOS, or Mplus, which provides the path coefficients indicating the relationships among variables.

Estimation methods include Maximum Likelihood, Generalized Least Squares, or Bayesian estimation depending on the distribution of the data and the sample size. Model fit is then tested using indices like Chi-Square, RMSEA (Root Mean Square Error of Approximation), and CFI (Comparative Fit Index). Model refinement may involve re-specification of the model based on the results obtained in the testing phase.
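A minimal sketch of this workflow in R, assuming the lavaan package and hypothetical variable names (item1–item3, satisfaction, loyalty, price, survey_data), might look like the following:

# Minimal SEM sketch with lavaan (hypothetical model and data)
library(lavaan)
model <- '
  satisfaction =~ item1 + item2 + item3    # measurement model: a latent factor
  loyalty ~ satisfaction + price           # structural (regression) path
'
fit <- sem(model, data = survey_data)
summary(fit, fit.measures = TRUE)          # reports chi-square, RMSEA, CFI, among other indices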


Applications of SEM

SEM is used across various fields such as psychology, education, business, and health sciences. In psychology, SEM helps in understanding the relationship between latent constructs like intelligence, anxiety, and job performance. In education, it can analyze the influence of teaching methods on student learning and outcomes. In business, SEM is applied to study consumer satisfaction and brand loyalty.

Challenges and Considerations

While SEM is a powerful tool, it comes with challenges such as the need for large sample sizes and complex data handling requirements. Mis-specification of the model can lead to incorrect conclusions, making model testing and refinement critical steps in the SEM process.

Conclusion

Structural Equation Modeling is a robust statistical technique that offers detailed insights into complex variable relationships. It is a valuable tool in the researcher’s toolkit, allowing for the precise testing of theoretical models.


Understanding Negative Binomial Regression: An Overview


By Shashikant Nishant Sharma

Negative binomial regression is a type of statistical analysis used for modeling count data, especially in cases where the data exhibits overdispersion relative to a Poisson distribution. Overdispersion occurs when the variance exceeds the mean, which can often be the case in real-world data collections. This article explores the fundamentals of negative binomial regression, its applications, and how it compares to other regression models like Poisson regression.

What is Negative Binomial Regression?

Negative binomial regression is an extension of Poisson regression that adds an extra parameter to model the overdispersion. While Poisson regression assumes that the mean and variance of the distribution are equal, negative binomial regression allows the variance to be greater than the mean, which often provides a better fit for real-world data where the assumption of equal mean and variance does not hold.

Mathematical Foundations

The negative binomial distribution can be understood as a mixture of Poisson distributions, where the mixing distribution is a gamma distribution. The model is typically expressed as:

A random variable X follows a negative binomial distribution if its probability mass function is given by:

f(x) = C(x + r − 1, r − 1) · p^r · q^x, where x = 0, 1, 2, …, and p + q = 1.

Here we consider a sequence of Bernoulli trials with probability of success p and probability of failure q, and f(x) is the probability that exactly x failures occur before the r-th success.

Equivalently, (x + r) trials are required to produce r successes: the first (x + r − 1) trials contain (r − 1) successes and x failures, and the (x + r)-th trial is a success.

Then f(x) = C(x + r − 1, r − 1) · p^(r−1) · q^x · p

f(x) = C(x + r − 1, r − 1) · p^r · q^x

When to Use Negative Binomial Regression?

Negative binomial regression is particularly useful in scenarios where the count data are skewed, and the variance of the data points is significantly different from the mean. Common fields of application include:

  • Healthcare: Modeling the number of hospital visits or disease counts, which can vary significantly among different populations.
  • Insurance: Estimating the number of claims or accidents, where the variance is typically higher than the mean.
  • Public Policy: Analyzing crime rates or accident counts in different regions, which often show greater variability.

Comparing Poisson and Negative Binomial Regression

While both Poisson and negative binomial regression are used for count data, the choice between the two often depends on the nature of the data’s variance:

  • Poisson Regression: Best suited for data where the mean and variance are approximately equal.
  • Negative Binomial Regression: More appropriate when the data exhibits overdispersion.

If a Poisson model is fitted to data that is overdispersed, it may underestimate the variance leading to overly optimistic confidence intervals and p-values. Conversely, a negative binomial model can provide more reliable estimates and inference in such cases.
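A minimal sketch of this comparison in R, assuming a hypothetical data frame df with a count outcome claims and predictors age and region, could be:

# Minimal sketch: Poisson vs. negative binomial regression
library(MASS)                                            # provides glm.nb()
pois_fit <- glm(claims ~ age + region, data = df, family = poisson)
nb_fit   <- glm.nb(claims ~ age + region, data = df)     # also estimates the dispersion parameter theta
summary(nb_fit)
AIC(pois_fit, nb_fit)    # a clearly lower AIC for the NB fit is one sign of overdispersion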

Implementation and Challenges

Implementing negative binomial regression typically involves statistical software such as R, SAS, or Python, all of which have packages or modules designed to fit these models to data efficiently. One challenge in fitting negative binomial models is the estimation of the dispersion parameter, which can sometimes be sensitive to outliers and extreme values.

Conclusion

Negative binomial regression is a robust method for analyzing count data, especially when that data is overdispersed. By providing a framework that accounts for variability beyond what is expected under a Poisson model, it allows researchers and analysts to make more accurate inferences about their data. As with any statistical method, the key to effective application lies in understanding the underlying assumptions and ensuring that the model appropriately reflects the characteristics of the data.


A Comprehensive Guide to Data Analysis Using R Studio


By Shashikant Nishant Sharma

In today’s data-driven world, the ability to effectively analyze data is becoming increasingly important across various industries. R Studio, a powerful integrated development environment (IDE) for R programming language, provides a comprehensive suite of tools for data analysis, making it a popular choice among data scientists, statisticians, and analysts. In this article, we will explore the fundamentals of data analysis using R Studio, covering essential concepts, techniques, and best practices.

1. Getting Started with R Studio

Before diving into data analysis, it’s essential to set up R Studio on your computer. R Studio is available for Windows, macOS, and Linux operating systems. You can download and install it from the official R Studio website (https://rstudio.com/).

Once installed, launch R Studio, and you’ll be greeted with a user-friendly interface consisting of several panes: the script editor, console, environment, and files. Familiarize yourself with these panes as they are where you will write, execute, and manage your R code and data.

2. Loading Data

Data analysis begins with loading your dataset into R Studio. R supports various data formats, including CSV, Excel, SQL databases, and more. You can use functions like read.csv() for CSV files, read.table() for tab-delimited files, and read_excel() from the readxl package for Excel files.

# Example: Loading a CSV file
data <- read.csv("data.csv")

After loading the data, it’s essential to explore its structure, dimensions, and summary statistics using functions like str(), dim(), and summary().
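For example, after loading the data you can run:

# Example: Inspecting the loaded data
str(data)       # variable types and a preview of the values
dim(data)       # number of rows and columns
summary(data)   # summary statistics for each column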

3. Data Cleaning and Preprocessing

Before performing any analysis, it’s crucial to clean and preprocess the data to ensure its quality and consistency. Common tasks include handling missing values, removing duplicates, and transforming variables.

# Example: Handling missing values
data <- na.omit(data)

# Example: Removing duplicates
data <- unique(data)

# Example: Transforming variables
data$age <- log(data$age)

Additionally, you may need to convert data types, scale or normalize numeric variables, and encode categorical variables using techniques like one-hot encoding.
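For instance (the column names below are illustrative):

# Example: Converting types, scaling, and one-hot encoding
data$gender <- factor(data$gender)                   # declare a categorical variable as a factor
data$income_scaled <- as.numeric(scale(data$income)) # standardize a numeric variable
dummies <- model.matrix(~ gender - 1, data = data)   # one-hot (dummy) encode the factor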

4. Exploratory Data Analysis (EDA)

EDA is a critical step in data analysis that involves visually exploring and summarizing the main characteristics of the dataset. R Studio offers a plethora of packages and visualization tools for EDA, including ggplot2, dplyr, tidyr, and plotly (whose ggplotly() function makes ggplot2 graphics interactive).

# Example: Creating a scatter plot
library(ggplot2)
ggplot(data, aes(x = age, y = income)) + 
  geom_point() + 
  labs(title = "Scatter Plot of Age vs. Income")

During EDA, you can identify patterns, trends, outliers, and relationships between variables, guiding further analysis and modeling decisions.

5. Statistical Analysis

R Studio provides extensive support for statistical analysis, ranging from basic descriptive statistics to advanced inferential and predictive modeling techniques. Common statistical functions and packages include summary(), cor(), t.test(), lm(), and glm().

# Example: Conducting a t-test
t_test_result <- t.test(data$income ~ data$gender)
print(t_test_result)

Statistical analysis allows you to test hypotheses, make inferences, and derive insights from the data, enabling evidence-based decision-making.

6. Machine Learning

R Studio is a powerhouse for machine learning with numerous packages for building and evaluating predictive models. Popular machine learning packages include caret, randomForest, glmnet, and xgboost.

# Example: Training a random forest model
library(randomForest)
model <- randomForest(target ~ ., data = data)

You can train models for classification, regression, clustering, and more, using techniques such as decision trees, support vector machines, neural networks, and ensemble methods.

7. Reporting and Visualization

R Studio facilitates the creation of professional reports and visualizations to communicate your findings effectively. The knitr package enables dynamic report generation, while ggplot2, plotly, and shiny allow for the creation of interactive and customizable visualizations.

# Example: Generating a dynamic report
library(knitr)
knitr::kable(head(data))

Interactive visualizations enhance engagement and understanding, enabling stakeholders to interactively explore the data and insights.

Conclusion

Data analysis using R Studio is a versatile and powerful process that enables individuals and organizations to extract actionable insights from data. By leveraging its extensive ecosystem of packages, tools, and resources, you can tackle diverse data analysis challenges effectively. Whether you’re a beginner or an experienced data scientist, mastering R Studio can significantly enhance your analytical capabilities and decision-making prowess in the data-driven world.

In conclusion, this article has provided a comprehensive overview of data analysis using R Studio, covering essential concepts, techniques, and best practices. Armed with this knowledge, you’re well-equipped to embark on your data analysis journey with R Studio and unlock the full potential of your data.


WOMEN EMPOWERMENT

By N Kavya

Empowerment means giving authority and power to someone; women's empowerment therefore refers to enabling women to make their own decisions. It means women should have full equality across all fields, regardless of stereotypes.

The Current State of Gender Equality:

On the World Economic Forum's Global Gender Gap Index of 2021, India ranks 140th among 156 nations, "becoming the third-worst performer in South Asia." India fell 28 places from its 2020 rank of 112th. The report cites several reasons for this fall. In terms of political empowerment, the number of female ministers declined from about 23% in 2019 to just 9% in 2021. The female workforce participation rate also decreased "from 24.8% to 22.3%." Additionally, the "share of women in senior and managerial positions also remains low." The report also indicates that women in India earn just one-fifth of what men earn.

Furthermore, "one in four women" endure "intimate violence" at least once in their lifetime. Although India has achieved gender parity in educational attainment, illiteracy rates among women remain high. The report indicates that just 65.8% of women in India were literate in 2021, compared with 82.4% of men.


Women also endure inequality concerning land and property rights. A 2016 UNICEF report noted that only 12.7% of properties in India “are in the names of women” despite 77% of women in India depending on agricultural work as a core source of income.

Benefits of Empowering Women in India:

Women make up nearly half of India's population and represent a significant portion of the nation's untapped economic potential. As such, empowering women in India through equal opportunities would allow them to contribute to the economy as productive citizens. With higher literacy rates and equal pay for equal work, women can thrive economically and rise out of poverty.

Protecting women and girls from violence and abuse while challenging the stigmas against reporting crimes would overall create a much safer society. Improving the female political representation rate would enable more women to serve as role models for young girls and allow a platform to bring awareness to the issues affecting women in India. Overall, gender equality allows for women to live a better quality of life, allowing them to determine their futures beyond traditional expectations.

Women Of Worth (WOW):

According to its website, "Women Of Worth exists for the growth, empowerment, and safety of girls and women," standing "for justice, equality and change." WOW began in 2008, created by a group of women who longed for change in a society rife with gender-discriminatory practices. Its ultimate vision is "to see women and girls live up to their fullest potential," with a mission of empowering women in India.

The organization has three focal areas:

1. Advocacy Work: WOW utilizes social media platforms to raise awareness of gender inequality and "change attitudes and behavior."

2. Training and Health Services: WOW provides training to both men and women in schools, tertiary institutions, and companies on women's safety and rights. It also presents lectures and "keynote addresses" on the topic. Furthermore, WOW provides counseling sessions to improve mental health.

3. Rehabilitation and Restoration: WOW offers "counseling, life skills training, and therapy" to children and women who are victims of abuse, neglect, and trafficking.

WOW's efforts have seen success. The organization helped to rescue 200 girls from abusive backgrounds, providing them with rehabilitation services. WOW also gave 11 girls scholarships to continue their education. WOW provided training on gender equality to about 800 working people, 1,500 students, 200 parents, and 300 educators.

Gender equality is a crucial cornerstone in the advancement of any society or nation, as it affects all areas of society from economic growth to education, health, and quality of life. Gender inequality in India is a deep-rooted, complex, and multi-layered issue, but it is also an essential battle to overcome to see the fullest potential of the nation.

How are women empowered in India?

The Constitution of India has certain provisions that specifically focus on women's empowerment and prevent discrimination against women in society. Article 14 guarantees equality before the law. Article 15 enables the state to make special provisions for women.

Beti Bachao Beti Padhao Andolan has been launched to create awareness among the people and to ensure that all girl children in the country are educated. The government promotes this scheme by forming District Task Forces and Block Task Forces. The scheme was launched in the Panipat district of Haryana on 22 January 2015 with initial funding of Rs. 100 crore. Before the launch of this scheme, the Child Sex Ratio of Panipat was 808 in 2001 and 837 in 2011.
The programme is publicized widely in print and electronic media, and the logo of this scheme is commonly seen on government buildings and public infrastructure, such as the pillars of National Highway 44, the Panipat District Court, and the bus stand and railway station of Panipat district.

Financial independence is important for women's empowerment. Women who are educated and earning are in a much better position in our society compared to uneducated women workers. Therefore, a scheme called Working Women Hostels has been launched so that safe and convenient accommodation can be provided to working women. The benefit of this scheme is given to every working woman without any distinction of caste, religion, marital status, etc. To benefit from this scheme, the gross total income of a woman should not exceed Rs. 50,000 per month in metropolitan cities, whereas in smaller cities it should not exceed Rs. 35,000 per month.

The focus of the government has shifted from women's development to women-led development. To achieve this goal, the government is working around the clock to maximize women's access to education, skill training, and institutional credit. MUDRA Yojana (Micro Units Development and Refinance Agency Ltd.) is one such scheme, launched on 8 April 2015, under which loans of up to Rs. 10 lakh are provided to women entrepreneurs without any collateral. For instance, Kamla, a daily-wage laborer from Panipat, took a loan of Rs. 45,000 from the State Bank of India to start work in a beauty parlor and is now engaged in gainful employment with dignity.

Conclusion:

Women must have an equal voice, rights, and opportunities throughout their lives. Gender equality can make a difference to individual lives and whole communities. Economic and social empowerment places women and girls in a stronger position, and economic empowerment gives them a voice in decision-making processes. Women should also be given rights equal to men's in order to truly empower them. They need to be strong, aware, and alert at all times for their growth and development. The most common challenges relate to the education, poverty, health, and safety of women.

Radio In India

By N Kavya

Radio broadcasting began in India in 1922. The government-owned station All India Radio (AIR) has dominated broadcasting since 1936.

Broadcasting in India actually began about 13 years before AIR came into existence. In June 1923 the Radio Club of Bombay made the first ever broadcast in the country. This was followed by the setting up of the Calcutta Radio Club five months later. The Indian Broadcasting Company (IBC) came into being on July 23, 1927, only to face liquidation in less than three years.

In April 1930, the Indian Broadcasting Service, under the Department of Industries and Labour, commenced its operations on an experimental basis. Lionel Fielden was appointed the first Controller of Broadcasting in August 1935. In the following month Akashvani Mysore, a private radio station was set up. On June 8, 1936, the Indian State Broadcasting Service became All India Radio.

The Central News Organisation (CNO) came into existence in August 1937. In the same year, AIR came under the Department of Communications and four years later came under the Department of Information and Broadcasting. When India attained independence, there were six radio stations in India, at Delhi, Bombay, Calcutta, Madras, Tiruchirapalli and Lucknow. The following year, CNO was split up into two divisions, the News Services Division (NSD) and the External Services Division (ESD). In 1956 the name AKASHVANI was adopted for the National Broadcaster. The Vividh Bharati Service was launched in 1957 with popular film music as its main component.

The phenomenal growth achieved by All India Radio has made it one of the largest media organisations in the world. With a network of 262 radio stations, AIR today is accessible to almost the entire population of the country and nearly 92% of the total area. A broadcasting giant, AIR today broadcasts in 23 languages and 146 dialects catering to a vast spectrum of socio-economically and culturally diverse populace.

Programmes of the External Services Division are broadcast in 11 Indian and 16 foreign languages reaching out to more than 100 countries. These external broadcasts aim to keep the overseas listeners informed about developments in the country and provide a rich fare of entertainment as well.

The News Services Division of All India Radio broadcasts 647 bulletins daily for a total duration of nearly 56 hours in about 90 languages/dialects across its Home, Regional, External and DTH services. In addition, 314 hourly news headlines are mounted on FM mode from 41 AIR stations, and 44 Regional News Units originate 469 daily news bulletins in 75 languages. Beyond the daily bulletins, the News Services Division also mounts a number of news-based programmes on topical subjects from Delhi and its Regional News Units.

AIR at present operates 18 FM stereo channels, called AIR FM Rainbow, targeting the urban audience with a refreshing style of presentation. Four more FM channels, called AIR FM Gold, broadcast composite news and entertainment programmes from Delhi, Kolkata, Chennai and Mumbai. With the FM wave sweeping the country, AIR is augmenting its Medium Wave transmission with additional FM transmitters at regional stations.

In April 2020, as per a survey by AZ Research PPL commissioned by the Association of Radio Operators for India (AORI), radio listenership in India touched a peak of 51 million.

Does radio have a future?

The consoles, connected watches and TVs that we use every day will be just another way in which radio stations can broadcast and increase their audience numbers. Since its creation, radio has continually evolved with the times.

Why Radio is still popular?

Portable and Inexpensive: Radio is among the most portable modes of communication. Radios can be used in cars, stores, and other places, which helps to reach the targeted audience. According to researchers, broadcast radio reaches 99% of the Indian population today.

Following the Government's decision to transition to the digital mode of transmission, AIR is switching from analog to digital in a phased manner. The technology adopted is Digital Radio Mondiale (DRM). With the target of complete digitization by 2017, listeners can look forward to highly enhanced transmission quality in the near future.

MICROPHONES

By N Kavya

A microphone is a device that translates sound vibrations in the air into electronic signals, which can then be recorded to a medium or reproduced over a loudspeaker. Microphones enable many types of audio recording devices for purposes including communications of many kinds, as well as music, vocals, speech and sound recording.

Types Of Microphone

There are three main types of microphones, based on construction:

1. Dynamic/Moving coil. 2. Ribbon. 3. Condenser/Capacitor

1. Dynamic / Moving coil

A microphone in which the sound waves cause a movable wire or coil to vibrate in a magnetic field and thus induce a current.

Key Advantages -:

1. Rugged and able to handle high sound pressure levels, like those delivered by a kick drum.
2. Provide good sound quality in all areas of microphone performance.
3. They do not require a power source to run
4. They are relatively cheap

Key disadvantages -:

1. The heavy diaphragm and wire coil limit the movement of the assembly, which in turn restricts the frequency and transient response of the microphone.
2. Generally not as suitable as condenser microphones for recording instruments with higher frequencies and harmonics, such as a violin.

Dynamic microphones can be used for many applications, produce an excellent sound and are suitably rugged – great for traveling on the road. They are best avoided when recording high-frequency content on an important recording.

For reliable, everyday tasks you will not find a more multifaceted, trustworthy device than a good quality dynamic microphone.

2. Ribbon -:

A ribbon microphone, also known as a ribbon velocity microphone, is a type of microphone that uses a thin aluminum, duraluminum or nanofilm of electrically conductive ribbon placed between the poles of a magnet to produce a voltage by electromagnetic induction. Ribbon microphones are typically bidirectional, meaning that they pick up sounds equally well from either side of the microphone

Key Advantages -:

1. Ribbon microphones are very sensitive and accurate
2. Ribbon microphones have very low self-noise
3. Ribbon microphones tend not to pick up lots of background noise
4. Ribbon microphones capture a smooth, natural high-frequency response
5. Ribbon microphones produce a warm, natural sound rather than a thin, tinny one

Key disadvantages -:

1. Ribbon microphones can be very large and heavy
2. Ribbon microphones are very sensitive to air movements
3. It is very difficult to achieve a tight polar pattern
4. The ribbon is fragile and susceptible to damage
5. Ribbon microphones are not as popular as dynamic microphones
6. Ribbon microphones require more maintenance and can be very expensive

Ribbon microphones are often described as the most natural-sounding microphones available, and for good reason: they are dynamic-type microphones that use a thin ribbon of aluminum foil (instead of a diaphragm attached to a coil) to pick up sound.

3. Condenser/Capacitor Microphones -:

A condenser capsule is constructed like a capacitor. It consists of a thin membrane in close proximity to a solid metal plate. The membrane, or diaphragm as it is often called, must be electrically conductive, at least on its surface. The most common material is gold-sputtered mylar, but some (mostly older) models employ an extremely thin metal foil.

When sound waves hit the diaphragm, it moves back and forth relative to the solid backplate. In other words, the distance between the two capacitor plates changes. As a result, the capacitance changes in rhythm with the sound waves, and the sound is thus converted into an electrical signal.

Key Advantages -:

1. They have a greater dynamic range than ribbon or dynamic mics.
2. They have a better frequency response than dynamic mics.
3. They have a lower noise floor than dynamic or ribbon mics.
4. When hit with loud transients, they generally sound snappier than dynamic or ribbon mics.

Key Disadvantages -:

1. They require a power source (phantom power or a battery) to operate.
2. They are more delicate than dynamic microphones and can be damaged by rough handling or very high sound pressure levels.
3. They are generally more expensive.
4. Their high sensitivity means they also pick up more background noise.

Condenser microphones are best used to capture vocals and high frequencies. They are also the preferred type of microphone for most studio applications.

Conclusion -:

Microphones are used everywhere, from stage performances and broadcasting to talking on the phone. A microphone is a transducer, a device that converts one form of energy into another. Microphones are an essential part of any audio recording system.

Bitcoin: The Future?

N kavya

Bitcoin is a type of digital currency that enables instant payments to anyone. Introduced in January 2009, it is based on an open-source protocol and is not issued by any central authority. It is decentralized electronic cash that does not rely on banks: bitcoins can be sent from one user to another on the Bitcoin blockchain network without the need for intermediaries. It is primarily used for sending or receiving money over the internet, even between strangers. Bitcoin is predicted to grow at a rapid pace over the years, along with its value, and it is typically purchased as an investment by many industries and individuals.


Unlike dollars and euros, bitcoins are not managed by a central government under specific rules, nor are they owned by any country, individual, or group. Proponents argue that this reduces the chances of corruption and inflation.

History -:

The origin of Bitcoin is unclear, as is the identity of its founder. A person, or a group of people, going by the name Satoshi Nakamoto is said to have conceptualized an accounting system in the aftermath of the 2008 financial crisis.

Uses -:

1. Originally, Bitcoin was intended to provide an alternative to fiat money and become a universally accepted medium of exchange directly between two involved parties.
2. Fiat money is a government-issued currency that is not backed by a commodity such as gold.
3. Fiat money gives central banks greater control over the economy because they can control how much money is printed.
4. Most modern paper currencies, such as the US dollar and the Indian rupee, are fiat currencies.

Acquiring Bitcoins -:

1. One can either mine new bitcoins, given sufficient computing capacity, purchase them via exchanges, or acquire them in over-the-counter, person-to-person transactions.
2. Miners are the people who validate a Bitcoin transaction and secure the network with their hardware.
3. The Bitcoin protocol is designed in such a way that new Bitcoins are created at a fixed rate.
4. No developer has the power to manipulate the system to increase their own profits.
5. One unique aspect of Bitcoin is that only 21 million units will ever be created (a rough calculation of this cap is sketched after this list).
6. A Bitcoin exchange functions like a bank where a person buys and sells Bitcoins with traditional currency. Depending on the demand and supply, the price of a Bitcoin keeps fluctuating.
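
As a back-of-the-envelope illustration of points 3 and 5 above: the protocol started with a reward of 50 new bitcoins per block and halves that reward every 210,000 blocks, which is why the total supply converges to roughly 21 million. A short Python sketch:

```python
# Rough sketch of why the Bitcoin supply approaches roughly 21 million coins:
# the block reward started at 50 BTC and halves every 210,000 blocks.
reward = 50.0                  # initial block reward, in BTC
blocks_per_halving = 210_000
total_supply = 0.0

while reward >= 1e-8:          # stop once the reward drops below one satoshi
    total_supply += reward * blocks_per_halving
    reward /= 2

print(f"Approximate maximum supply: {total_supply:,.0f} BTC")  # about 21,000,000
```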

Bitcoin Regulation -:

The supply of bitcoins is regulated by software and by the agreement of the system's users, and it cannot be manipulated by any government, bank, organization, or individual. Because Bitcoin was intended as a global, decentralised currency, any central authority regulating it would effectively defeat that purpose. It should be noted, however, that multiple governments across the world are investing in Central Bank Digital Currencies (CBDCs), which are digital versions of national currencies.
The legitimacy of Bitcoins (or cryptocurrencies)

In India -:
In the 2018-19 budget speech, the Finance Minister announced that the government does not consider cryptocurrencies as legal tender and will take all measures to eliminate their use in financing illegitimate activities or as a part of the payment system.
In April 2018, the Reserve Bank of India (RBI) notified that entities regulated by it should not deal in virtual currencies or provide services for facilitating any person or entity in dealing with or settling virtual currencies.
However, the Supreme Court struck down the ban on the trading of virtual currencies (VC) in India, which was imposed by the RBI.
The Supreme Court has held that cryptocurrencies are like commodities and hence they cannot be banned.

Possible Reasons for the Rise in the Value of the Bitcoin -:

1. Increased acceptance during the pandemic.
2. Global legitimacy from large players like the payments firm PayPal and Indian lenders like State Bank of India, ICICI Bank, HDFC Bank, and Yes Bank.
3. Some pension funds and insurance funds are investing in Bitcoins.

Bitcoin Transaction -:

A Bitcoin address is built from a public key. It works much like an email address: anyone who knows it can look it up and send bitcoins to it. The private key is analogous to an email password, since only someone holding it can send the bitcoins stored at that address; that is why it is essential to keep the private key confidential. To send bitcoins, you must prove to the network that you possess the private key of that particular address without revealing the key itself. This is done with a branch of mathematics called public-key cryptography. The public key identifies the user who owns the bitcoins, much like an ID number, and the Bitcoin address is simply a version of the public key that can be typed and read effortlessly; anyone who wants to send you bitcoins needs that address.
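
To make the idea concrete, here is a minimal sketch of signing and verification using the third-party Python `ecdsa` package (installed with `pip install ecdsa`). It only illustrates the public-key principle described above; it is not real Bitcoin address or transaction encoding, which additionally involves version bytes, Base58Check and a specific serialization format.

```python
# Minimal illustration of the public-key idea behind Bitcoin transactions.
# Requires the third-party "ecdsa" package; not real Bitcoin serialization.
import hashlib
import ecdsa

# The private key stays secret; the public key can be shared freely.
private_key = ecdsa.SigningKey.generate(curve=ecdsa.SECP256k1)
public_key = private_key.get_verifying_key()

# Bitcoin addresses are derived by hashing the public key (SHA-256 then
# RIPEMD-160); the real format then adds a version byte and Base58Check.
# Note: ripemd160 may be unavailable in some OpenSSL builds of hashlib.
sha = hashlib.sha256(public_key.to_string()).digest()
pubkey_hash = hashlib.new("ripemd160", sha).hexdigest()
print("hash of public key:", pubkey_hash)

# "Spending" coins: the owner signs the transaction data with the private key...
transaction = b"pay 0.01 BTC to <recipient address>"
signature = private_key.sign(transaction)

# ...and anyone can check the signature with the public key, without ever
# seeing the private key.
print("signature valid:", public_key.verify(signature, transaction))
```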

However, security concerns around Bitcoin are increasing day by day across the world. Since digital wallets are used to store bitcoins, they may be targeted by hackers as their value increases.

YOUTUBE MARKETING

N kavya

YouTube marketing is the practice of promoting businesses and products on YouTube, by uploading valuable videos to a company's YouTube channel or by using YouTube ads. More and more companies are including YouTube as part of their digital marketing strategy.

That's partly because YouTube as a platform is growing insanely fast. But it's also because video is an extremely powerful medium.

The truth about YouTube marketing -:

YouTube is an opportunity to get more traffic and customers. YouTube can be a very competitive place. This means you can’t just start uploading videos and expect to see results. Countless “big brands” have jumped into YouTube marketing head-first… with only a handful of views and subscribers to show for it.

The truth is, to succeed on YouTube, you need to have a winning strategy, the ability to create great videos, and the SEO know-how to optimize those videos around keywords and topics that people on YouTube care about.

Why YouTube is considered a major market for advertising -:

1. YouTube is the 2nd most-visited website in the world.
2. 2 billion people log in to YouTube every month.
3. 68% of YouTube users state that videos help them make a “purchasing decision”.
4. The number of SMBs advertising on YouTube has doubled over the last 2 years.

What is the main goal of YouTube marketing?

The main goal of your YouTube marketing should be to help your customers find you. A memorable video keeps your brand on people's minds long after they have seen your message online, so they can later search for you and become customers.

Objectives of YouTube marketing -:

YouTube videos should have clear objectives that align with your company goals. If you focus all of your efforts on gimmicks that grab viewers' attention and help your video go viral, you may overlook the reason you market on YouTube: to get more business. Make sure your attention-getting videos help you move toward your company objectives.

1. Reaching Your Target Customer -:

If your target demographic is women between the ages of 35 and 45, and your video catches on with teenagers, you may be popular, but you won’t be effective. Think about the kinds of images and messages that would appeal to your customer, and make it one of your objectives to use as many of those images as possible.

2. Making It Easy to Find You -:

One of your objectives for your YouTube marketing should be to help your customer find you. A catchy slogan or prominent company name throughout the video can keep you on people’s minds long after they’ve seen your message online. They can then do an online search and find you. You should include a link to your website, along with any other contact information, such as an email address, business address, or phone number. Don’t lose sight of your objective of helping customers contact you.

3. Establishing a Relationship -:

You should evaluate the relationship you want with your customers, and create a video that helps them feel you are one of them. You can convey a sense of trust, lightheartedness, sophistication, down-to-earth values, or even anger, to name a few relationship starters.

4. Keeping Your Product in Mind -:

Don’t get so involved with making an interesting video that you lose sight of your number-one objective: letting people know about your product or service. Feature your product prominently and clearly, so that viewers won’t have to wonder what you are marketing.

Importance of YouTube to Business -:

1. Advertising -: As the largest video-sharing website on the Internet, according to NBC, YouTube also doubles as one of the largest video search engines in the world.
2. Customer Communication -: YouTube provides an array of channels for businesses to communicate with customers and prospects.
3. Internal Communication -: Because YouTube provides a convenient and easy-to-use video hosting service, it can serve as an inexpensive way to post instructional videos, announcements, and other internal communications.
4. Complaints -: As a business owner, you should carefully monitor YouTube for customer feedback and complaints.
5. Considerations -: YouTube can offer numerous important benefits to businesses, but you should keep some considerations in mind when using this resource.

Advantages of YouTube Marketing –:

1. Heavy Traffic
2. Higher Visibility on Google
3. Build Your Email List on YouTube
4. Higher Conversion Rates
5. Multiple Video Types
6. Massive Media Library

Disadvantages of YouTube Marketing -:

1. Control
2. Targeting
3. Ad Bypass
4. Auctions
5. Sales Conversion

YouTube provides every business with a huge opportunity to get more traffic and customers. However, it is also a very competitive place. This means that you can't just start uploading videos and expect to see results overnight. Many big businesses jump into YouTube marketing with no strategy, and their lack of views and subscribers shows it. The truth is that succeeding on YouTube is not just about creating great videos; it's about knowing how to optimize those videos around keywords that people on YouTube are searching for.

DEPRESSION

N kavya

Depression is a mood disorder that causes a persistent feeling of sadness and loss of interest; it is also called major depressive disorder or clinical depression. It affects how you feel, think, and behave and can lead to a variety of emotional and physical problems. Depression is not a weakness; you cannot simply “snap out of it.” Depression may require long-term treatment, but we should not feel discouraged, because most people with depression feel better with medication, psychotherapy, or both.

Let us now look at the symptoms of depression -:

• Feelings of sadness, tearfulness, emptiness, or hopelessness
• Angry outbursts, irritability or frustration, even over small matters
• Loss of interest or pleasure in most or all normal activities, in their hobbies or sports
• Sleep disturbances, including insomnia or sleeping too much
• Tiredness and lack of energy, so even small tasks take extra effort
• Reduced appetite and weight loss or increased cravings for food and weight gain
• Anxiety, agitation, or restlessness
• Slowed thinking, speaking, or body movements
• Feelings of worthlessness or guilt, fixating on past failures or self-blame
• Trouble thinking, concentrating, making decisions, and remembering things
• Frequent or recurrent thoughts of death, suicidal thoughts, suicide attempts, or suicide
• Unexplained physical problems, such as back pain or headaches.

Depression may occur only once in a person's life, but people typically have multiple episodes. During these episodes, symptoms occur most of the day, nearly every day, and affect day-to-day activities such as work, school, social activities, and relationships with others. Some people might even feel generally miserable without really knowing the exact reason.

• Depression in children and teens may include sadness, irritability, clinginess, worry, aches and pains, extreme sensitivity, feeling misunderstood, anger, and poor performance.
• Depression symptoms in older adults may include memory difficulties or personality changes, fatigue, and often a wish to stay at home rather than go out to socialize or do new things.

Causes of depression –:

• Biological differences – People with depression appear to have physical changes in their brains. The significance of these changes is still uncertain.
• Brain chemistry – Neurotransmitters are naturally occurring brain chemicals that likely play a role in depression.
• Hormones – Changes in the body’s balance of hormones may be involved in causing or triggering depression.
• Inherited traits – Depression is more common in people whose blood relatives also have this condition. Research shows genes may be involved in causing depression.

Risk factors of depression –:

• Certain personality traits, such as low self-esteem and being too dependent, self-critical, or pessimistic
• Traumatic or stressful events, such as physical or sexual abuse, the death or loss of a loved one, a difficult relationship, or financial problems.
• History of other mental health disorders, such as anxiety disorder, eating disorders, or post-traumatic stress disorder. Abuse of alcohol or recreational drugs.
• Serious or chronic illness, including cancer, stroke, chronic pain, or heart disease. Certain medications may also trigger depression such as some high blood pressure medications or sleeping pills.

Complications in depression – :

• Excess weight or obesity, which can lead to heart disease and diabetes
• Pain or physical illness
• Alcohol or drug misuse
• Anxiety, panic disorder, or social phobia
• Family conflicts, relationship difficulties, and work or school problems
• Social isolation
• Suicidal feelings, suicide attempts, or suicide
• Self-mutilation, such as cutting
• Premature death from medical conditions

Prevention of depression -:

There is no fixed way to prevent depression but these strategies may play a major role –
• Take steps to control stress
• Reach out to family and friends
• Get treatment at the earliest sign of a problem
• Consider getting long-term treatment, because it helps to prevent a relapse of symptoms.

Types of depressive disorders -:

• Major depressive disorder
• Anxious distress, melancholic, and agitated subtypes (major depression looks different in different people, so it is often characterized into these three types)
• Persistent depressive disorder
• Bipolar disorder
• Seasonal affective disorder (SAD)
• Psychotic disorder
• Peripartum (Postpartum) Depression
• Premenstrual Dysphoric Disorder
• ‘Situational ’Depression
• Atypical depression
• Clinical depression

General issues on Environmental ecology

The environment plays a significant role in supporting life on earth, but some issues are damaging life and the earth's ecosystems. These issues concern not only the environment but everyone who lives on the planet. Their main sources include pollution, global warming, greenhouse gases, and many others. The everyday activities of humans are constantly degrading the quality of the environment, which ultimately erodes the conditions needed for survival on earth. There are hundreds of issues causing damage to the environment, but here we discuss the main causes of environmental problems, because they are the most dangerous to life and the ecosystem.

Pollution – This is one of the main causes of environmental problems because it poisons the air, water, and soil, and creates noise. In the past few decades the number of industries has rapidly increased, and many of them discharge their untreated waste into water bodies, onto soil, and into the air. Much of this waste contains harmful and poisonous materials that spread easily through moving water and wind.

Greenhouse gases – These are the gases responsible for the increase in the temperature of the earth's surface. They are directly related to air pollution, because the pollution produced by vehicles and factories contains toxic chemicals that harm life and the environment.

Climate change – Due to environmental damage, the climate is changing rapidly, and phenomena such as smog and acid rain are becoming common. The number of natural calamities is also increasing; almost every year there are floods, famines, droughts, landslides, earthquakes, and other disasters.

Sustainable development recognises that social, economic and environmental issues are interconnected, and that decisions must take each of these aspects into account if they are to remain good decisions in the longer term. For sustainable development, accurate environmental forecasts and warnings, together with effective information on pollution, are essential for planning and for ensuring safe and environmentally sound socio-economic activities.


THE EARTH IS WHAT WE
ALL HAVE IN COMMON

DATA SCIENCE

Introduction:-

Data science is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from noisy, structured and unstructured data, and apply knowledge and actionable insights from data across a broad range of application domains. Data science is related to data mining, machine learning and big data.

Data science is a “concept to unify statistics, data analysis, informatics, and their related methods” in order to “understand and analyze actual phenomena” with data. It uses techniques and theories drawn from many fields within the context of mathematics, statistics, computer science, information science, and domain knowledge. However, data science is different from computer science and information science. Turing Award winner Jim Gray imagined data science as a “fourth paradigm” of science (empirical, theoretical, computational, and now data-driven) and asserted that “everything about science is changing because of the impact of information technology” and the data deluge.

A data scientist is someone who writes programming code and combines it with statistical knowledge to create insights from data.

Foundations:

Data science is an interdisciplinary field focused on extracting knowledge from data sets, which are typically large (see big data), and applying the knowledge and actionable insights from data to solve problems in a wide range of application domains. The field encompasses preparing data for analysis, formulating data science problems, analyzing data, developing data-driven solutions, and presenting findings to inform high-level decisions in a broad range of application domains. As such, it incorporates skills from computer science, statistics, information science, mathematics, information visualization, data integration, graphic design, complex systems, communication and business. Statistician Nathan Yau, drawing on Ben Fry, also links data science to human-computer interaction: users should be able to intuitively control and explore data. In 2015, the American Statistical Association identified database management, statistics and machine learning, and distributed and parallel systems as the three emerging foundational professional communities.
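
As a toy illustration of the first steps described above (preparing data, then analyzing it), here is a minimal Python sketch. It assumes the pandas library is installed; the file name "survey.csv" and its column names are hypothetical placeholders.

```python
# Toy sketch of data preparation and a first analysis pass with pandas.
# "survey.csv" and the column names below are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("survey.csv")

# Prepare: remove duplicate rows and rows missing key fields.
df = df.drop_duplicates().dropna(subset=["age", "income", "spend"])

# Analyze: distributions and pairwise correlations among the numeric variables.
print(df[["age", "income", "spend"]].describe())
print(df[["age", "income", "spend"]].corr())
```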

Relationship to statistics :


Many statisticians, including Nate Silver, have argued that data science is not a new field, but rather another name for statistics. Others argue that data science is distinct from statistics because it focuses on problems and techniques unique to digital data. Vasant Dhar writes that statistics emphasizes quantitative data and description. In contrast, data science deals with quantitative and qualitative data (e.g. images) and emphasizes prediction and action. Andrew Gelman of Columbia University has described statistics as a nonessential part of data science. Stanford professor David Donoho writes that data science is not distinguished from statistics by the size of datasets or use of computing, and that many graduate programs misleadingly advertise their analytics and statistics training as the essence of a data science program. He describes data science as an applied field growing out of traditional statistics. In summary, data science can therefore be described as an applied branch of statistics.

Etymology:

In 1962, John Tukey described a field he called “data analysis”, which resembles modern data science. In 1985, in a lecture given to the Chinese Academy of Sciences in Beijing, C.F. Jeff Wu used the term Data Science for the first time as an alternative name for statistics. Later, attendees at a 1992 statistics symposium at the University of Montpellier II acknowledged the emergence of a new discipline focused on data of various origins and forms, combining established concepts and principles of statistics and data analysis with computing.

The term “data science” has been traced back to 1974, when Peter Naur proposed it as an alternative name for computer science. In 1996, the International Federation of Classification Societies became the first conference to specifically feature data science as a topic. However, the definition was still in flux. After the 1985 lecture at the Chinese Academy of Sciences in Beijing, C.F. Jeff Wu again suggested in 1997 that statistics should be renamed data science. He reasoned that a new name would help statistics shed inaccurate stereotypes, such as being synonymous with accounting or limited to describing data. In 1998, Hayashi Chikio argued for data science as a new, interdisciplinary concept, with three aspects: data design, collection, and analysis.

During the 1990s, popular terms for the process of finding patterns in datasets (which were increasingly large) included “knowledge discovery” and “data mining”.

Modern usage:

The modern conception of data science as an independent discipline is sometimes attributed to William S. Cleveland. In a 2001 paper, he advocated an expansion of statistics beyond theory into technical areas; because this would significantly change the field, it warranted a new name. “Data science” became more widely used in the next few years: in 2002, the Committee on Data for Science and Technology launched Data Science Journal. In 2003, Columbia University launched The Journal of Data Science. In 2014, the American Statistical Association's Section on Statistical Learning and Data Mining changed its name to the Section on Statistical Learning and Data Science, reflecting the ascendant popularity of data science.

The professional title of “data scientist” has been attributed to DJ Patil and Jeff Hammerbacher in 2008. Though it was used by the National Science Board in their 2005 report, “Long-Lived Digital Data Collections: Enabling Research and Education in the 21st Century,” it referred broadly to any key role in managing a digital data collection.

Market:

Big data is becoming a tool for businesses and companies of all sizes. The availability and interpretation of big data has altered the business models of old industries and enabled the creation of new ones. Data scientists are responsible for breaking down big data into usable information and creating software and algorithms that help companies and organizations determine optimal operations.

The end…

New NASA Earth System Observatory to Help Address, Mitigate Climate Change

May 24, 2021

NASA will design a new set of Earth-focused missions to provide key information to guide efforts related to climate change, disaster mitigation, fighting forest fires, and improving real-time agricultural processes. With the Earth System Observatory, each satellite will be uniquely designed to complement the others, working in tandem to create a 3D, holistic view of Earth, from bedrock to atmosphere.



“I’ve seen firsthand the impact of hurricanes made more intense and destructive by climate change, like Maria and Irma. The Biden-Harris Administration’s response to climate change matches the magnitude of the threat: a whole of government, all hands-on-deck approach to meet this moment,” said NASA Administrator Sen. Bill Nelson. “Over the past three decades, much of what we’ve learned about the Earth’s changing climate is built on NASA satellite observations and research. NASA’s new Earth System Observatory will expand that work, providing the world with an unprecedented understanding of our Earth’s climate system, arming us with next-generation data critical to mitigating climate change, and protecting our communities in the face of natural disasters.”

DATA SCIENCE

Introduction:-

Data scientists combine mathematics, statistics and computer science to extract and analyze data from thousands of data sources in order to build creative and innovative business solutions. A data scientist's job involves solving a client's problems by providing solutions built on real-time data, tools, and algorithms.

Industries and Departments in which Data Scientist are hired:-

Data scientists and analysts are largely employed by IT companies and by the marketing, finance and retail sectors.
Companies use data scientists to report on what their clients demand and need and to propose innovative solutions for catering to them. Oil, gas and telecommunication companies have also started employing data scientists to serve their clients better.
Other sectors and departments that employ data scientists include:
● NHS
● Government offices
● Research institutions and universities.

The roles and responsibilities of a data scientist:-

● Handling vast amounts of data and choosing reliable sources.

● Developing prediction models and advanced machine learning algorithms.

● Verifying data using data investigation and data analysis.

● Using data visualization techniques to present findings (a small sketch follows this list).

● Finding solutions to business problems by working with data engineers and data analysts.
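
As referenced in the list above, here is a minimal, self-contained sketch of two of those responsibilities: fitting a simple prediction model and presenting the result as a visualization. The data is synthetic and the variable names are hypothetical; it assumes numpy, scikit-learn and matplotlib are installed.

```python
# Minimal sketch: fit a simple prediction model and visualise the finding.
# The data is synthetic; variable names are hypothetical placeholders.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
ad_spend = np.sort(rng.uniform(10, 100, size=50))        # hypothetical predictor
sales = 3.5 * ad_spend + rng.normal(0, 25, size=50)      # hypothetical outcome

model = LinearRegression().fit(ad_spend.reshape(-1, 1), sales)
predicted = model.predict(ad_spend.reshape(-1, 1))

plt.scatter(ad_spend, sales, label="observed")
plt.plot(ad_spend, predicted, color="red", label="model prediction")
plt.xlabel("Ad spend")
plt.ylabel("Sales")
plt.legend()
plt.savefig("sales_model.png")                           # the finding, as a figure
```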

Educational qualification For data scientist:-

● Should have a BSc/BA degree in the field of Computer Science/ Software Engineering/Information Science/Mathematics.


● Should have a postgraduate degree/diploma certification in Data Science/Machine Learning.

Career growth of a Data Scientist:-

A data scientist's career typically starts as an associate data analyst and can go up to the role of Chief Data Scientist. Promotion can take two to five years, depending on performance, and with experience data scientists move into more senior positions.

CONCLUSION:-

Data scientists are among the most in-demand professionals in the world. They can make companies' shares skyrocket and reach new heights. Data science is a very high-paying industry, so finding a job with a seven-figure salary won't be a problem. Data science as an industry has a very bright future, and data scientists have the ability to change the world's future.

The Data Industry – A Brief Overview

The data industry is projected to grow by leaps and bounds over the next decade. Massive amounts of data are being generated every day, with a quintillion bytes being a safe estimate. Data professionals and statisticians are in high demand in this fast-paced, data-driven world. They perform many tasks, ranging from identifying data sources to analysing data. They also find trends and patterns in the data at hand, although the exact set of duties varies from organisation to organisation. Since data is relevant in almost every field now, the statistical requirements also understandably change across sectors.

Candidates aspiring to step into this industry are expected to have a fair knowledge of the statistical software in use; being proficient in one package increases job prospects manifold. It is nevertheless advisable that potential employees narrow down the types of companies they wish to work for, say biostatistical organisations, and hone their skills accordingly.

The most popular programming software packages utilised for statistical analysis are Stata, SAS, R and Python.

STATA

In the words of StataCorp, Stata is “a complete, integrated statistical software package that provides everything you need for data analysis, data management, and graphics”. This software comes in handy while storing and managing large sets of data and is menu-driven. It is available for Windows, Mac and Linux systems. Stata is one of the leading econometric software packages sold in the market today. Such is its importance that many universities have incorporated it into their coursework to make their students job-ready. Over 1,400 openings posted on Indeed put forward Stata as a precondition for selection. Facebook, Amazon and Mathematica are some of the many companies that require Stata as one of the qualifications for statistical and econometrics-related positions.

Python

Being an incredibly versatile programming language, Python is immensely popular. It is accessible for most people as it is easy to learn and write. Organisations ranging from Google to Spotify, all use Python in their development teams. Recently, Python has become synonymous with Data Science. In contrast to other programming languages, such as R, Python excels when it comes to scalability. It is also considerably faster than STATA and is equipped with numerous data science libraries. Python’s growing popularity has in part stemmed from its well-known community. Finding a solution to a challenging problem has never been easier because of its tight-knit community.

SAS

This is a command-driven software package that proves to be useful for statistical analysis as well as data visualization. SAS has been leading the commercial analytics space and provides great technical support. The software is quite expensive, making it beyond reach for many individuals; however, it holds a very large share of the market among private organisations and remains highly relevant in the corporate world.

Educational Qualifications and Online Courses

Employers typically look for statistics, economics, maths, computer science or engineering students for data-related jobs, with preference given to candidates holding post-graduate degrees. The key skills in demand include proficiency in statistical software, model building and deployment, data preparation, data mining and impeccable analytical skills. People looking to upskill themselves or diversify into a different career path to attain a higher pay bracket should give the data industry a shot. Coursera, Udemy, LinkedIn and various other platforms provide affordable courses in data science, programming and analytics for this purpose. A career in data is a rewarding one and also offers strong job satisfaction. This is a highly recommended profession in today's time.

Big Data and IoT Explained

How Big Data Influences Your IoT Solution

Technology keeps advancing and lives are improving every day. Businesses are doing everything they can to exceed the expectations of their customers, and IoT is the next promising step in that direction. The Internet of Things, IoT for short, is a platform that collects and analyses data from our everyday appliances with the aid of the internet and gives information to both the manufacturer and the user. This information could be about the servicing that is required or about a part that has become dysfunctional and needs to be replaced. The huge amount of data generated by these sensor-equipped machines is called Big Data (no surprise there, hopefully).

Big Data has always been present, but in earlier times it was simple and could easily be recorded on Excel spreadsheets and analysed as such. The data wasn't as complex back then and could easily be filled into the cells of spreadsheets. Now, however, the format of the data being transmitted is not fixed, and it can arrive as audio, video and pictures. This data cannot be collected and analysed by traditional programs. New, often cloud-based software is being developed that can help separate out the valuable information and recognise trends or patterns if there are any. Examples of this can be seen in apps like Netflix and Amazon Prime when they give you recommendations on the basis of what you have previously watched.

Big data is characterised by four Vs: volume, velocity, variety and veracity. The volume of data can run to trillions of gigabytes and has to be stored at multiple locations. The velocity at which data is currently transmitted and collected is unprecedented. Variety refers to the format of the data, which can be both structured and unstructured but equally important. Veracity refers to the accuracy of the data generated by a source, which needs to be verified for redundancy and to check whether it is suitable for analysis by a particular software package. This is where the role of the data analyst becomes important, and it may eventually be among the most sought-after professions.

IoT and Big Data are closely related yet distinct. IoT seeks to analyse data as it is transmitted, and that data then contributes to Big Data. A company can use both technologies at the same time. For example, a sensor in a car could emit a signal that the car is in need of servicing, and the owner might get a notification reminder for it. The record of when all the cars of the same company required servicing can then be stored and used for predictive analysis on newly manufactured cars.
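
As a toy sketch of the car-servicing example (all names, fields and thresholds here are hypothetical), each incoming sensor reading is appended to a history that stands in for the big-data store, and a simple rule decides when to notify the owner; in practice the accumulated history would later feed the predictive analysis mentioned above.

```python
# Toy sketch of the car-servicing example: an IoT-style reading triggers a
# maintenance alert, and every reading is kept for later predictive analysis.
# All names, fields and thresholds are hypothetical.
from datetime import datetime, timezone

SERVICE_THRESHOLD_KM = 10_000      # hypothetical distance between services
telemetry_history = []             # stands in for a real big-data store

def handle_reading(car_id: str, km_since_service: float) -> None:
    reading = {
        "car": car_id,
        "km_since_service": km_since_service,
        "received_at": datetime.now(timezone.utc).isoformat(),
    }
    telemetry_history.append(reading)            # accumulate for later analysis
    if km_since_service >= SERVICE_THRESHOLD_KM:
        print(f"{car_id}: servicing due ({km_since_service:.0f} km since last service)")

handle_reading("car-42", 10_250)   # prints a servicing notification
handle_reading("car-17", 3_800)    # no alert
```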

The integration of the two technologies can help both the consumer and the seller in the long run. Businesses will make more profits as they become more efficient in catering to the needs of their customers, and overall costs will be reduced as well. The customer might be hesitant at first, since the idea of appliances tracking usage behaviour can seem like an invasion of privacy, but it will save them time and money in maintenance and replacement.

Geographers and Uses of GIS

By Shashikant Nishant Sharma

Geographers often find it beneficial to understand GIS (Geographic Information System) algorithms, but it’s not always a strict requirement for all geographers. GIS is a powerful tool that allows geographers to analyze and interpret spatial data, and a basic understanding of GIS algorithms can enhance their ability to use GIS effectively. Here are a few reasons why geographers might benefit from understanding GIS algorithms:

  1. Better Use of GIS Software: Understanding the algorithms behind GIS software can help geographers make more informed decisions when choosing and utilizing specific tools. It enables them to select appropriate methods for data analysis and visualization.
  2. Customization and Problem Solving: A deeper understanding of GIS algorithms allows geographers to customize workflows and address specific spatial analysis problems more effectively. This knowledge empowers them to develop solutions tailored to their research or professional needs.
  3. Interpretation of Results: Knowing the algorithms applied in GIS helps geographers interpret the results of spatial analyses more accurately. This understanding allows them to critically evaluate the outcomes and make informed decisions based on a deeper comprehension of the underlying processes.
  4. Integration with Other Technologies: Geographers working at the intersection of GIS and other technologies, such as remote sensing or machine learning, may benefit from understanding the algorithms that drive these technologies. It facilitates integration and synergy between different tools and methods.
  5. Algorithm Development: Some geographers may engage in algorithm development for specific spatial analysis tasks. In such cases, a solid understanding of GIS algorithms is essential for creating effective and efficient solutions (a small illustration follows this list).
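
To make "GIS algorithms" concrete, below is a small, self-contained example of the kind of routine GIS software runs under the hood: the haversine formula for the great-circle distance between two latitude/longitude points. It is a sketch on a spherical Earth for illustration only; production GIS packages use tested geodesic routines on an ellipsoid.

```python
# The haversine formula: great-circle distance between two lat/lon points,
# assuming a spherical Earth. Illustrative only; real GIS libraries use
# geodesic calculations on an ellipsoid.
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# New Delhi to Mumbai: roughly 1,150 km along the great circle.
print(round(haversine_km(28.61, 77.21, 19.08, 72.88)), "km")
```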

However, it’s important to note that not all geographers need to delve deeply into GIS algorithms. Many geographers use GIS as a tool for spatial analysis without needing to understand the underlying algorithms at a detailed level. The level of understanding required depends on the specific tasks and goals of the geographer. Some may focus more on the conceptual and applied aspects of GIS, while others, especially those involved in GIS development or research, may need a more in-depth understanding of algorithms.

References

Abler, R. F. (1993). Everything in its place: GPS, GIS, and geography in the 1990s. The Professional Geographer, 45(2), 131-139.

Goodchild, M. F. (2004). GIScience, geography, form, and process. Annals of the Association of American Geographers, 94(4), 709-714.

Healy, G., & Walshe, N. (2019). Real-world geographers and GIS. Teaching Geography, 44(2), 52-55.

Johnston, R. J. (1999). Geography and GIS. Geographical Information Systems: Principles, Techniques, Management and Applications, 1, 39-47.

Sharma, S. N. (2019). Review of most used urban growth models. International Journal of Advanced Research in Engineering and Technology (IJARET), 10(3), 397-405.