
Volume 9, Issue 3

Solution to Collatz Conjecture
Original Research
The Collatz Conjecture, one of the unsolved problems in mathematics, states that for any positive integer, if the number is odd it is multiplied by 3 and 1 is added, and if it is even it is divided by 2; when this process is repeated, the sequence of numbers finally reaches 1. The Collatz Conjecture has notoriously escaped all attempted proofs. This paper presents a solution to the Collatz Conjecture with a statistical and logical/mathematical proof. The article demonstrates why the Collatz function cannot enter an infinite iterative loop and why the function reaches 1 for all positive integers.
American Journal of Applied Mathematics and Statistics. 2021, 9(3), 107-110. DOI: 10.12691/ajams-9-3-5
Pub. Date: October 12, 2021
2975 Views, 3 Downloads
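The iteration described in the abstract is straightforward to state in code. The sketch below (illustrative only, not taken from the paper) counts how many Collatz steps a positive integer takes to reach 1:

```python
def collatz_steps(n: int) -> int:
    """Count Collatz iterations until n reaches 1.

    The map: n -> 3n + 1 if n is odd, n -> n / 2 if n is even.
    """
    if n < 1:
        raise ValueError("n must be a positive integer")
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps
```

For example, starting from 27 the trajectory climbs as high as 9232 before descending, reaching 1 after 111 steps.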
On Evaluating the Volatility of Nigerian Gross Domestic Product Using Smooth Transition Autoregressive-GARCH (STAR - GARCH) Models
Original Research
STAR-GARCH models are hybrid models that combine the functional form of smooth transition autoregressive (STAR) models with that of generalized autoregressive conditional heteroscedasticity (GARCH) models. The two classes of STAR models considered in this paper are the exponential and logistic smooth transition autoregressive models (ESTAR and LSTAR). The functional form of each was combined with that of the GARCH model, and the resulting models are the ESTAR-GARCH and LSTAR-GARCH models. The derived equations were applied to Nigerian gross domestic product (real estate) for empirical illustration. Stationarity tests (unit root test, graphical and correlogram methods) revealed that the series was stationary at second difference. The derived hybrid model equations were used to determine which model performed better, using the information criteria (AIC, SIC and HQIC), the variances obtained from the data, performance measure indices (RMSE, MAE, MAPE, Theil U, bias proportion, variance proportion and covariance proportion) and in-sample forecast accuracy. By all criteria used, both the LSTAR-GARCH and ESTAR-GARCH models performed far better than the classical GARCH model; however, LSTAR-GARCH performed slightly better than ESTAR-GARCH. These results show that volatility in Nigerian gross domestic product (real estate) is best captured using logistic smooth transition GARCH (LSTAR-GARCH) models, which are therefore recommended to forecasters, investors and other end users.
American Journal of Applied Mathematics and Statistics. 2021, 9(3), 102-106. DOI: 10.12691/ajams-9-3-4
Pub. Date: October 11, 2021
2039 Views, 2 Downloads
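The distinction between the two STAR classes lies in the transition function. A minimal sketch of the standard logistic (LSTAR) and exponential (ESTAR) transition functions, using the usual textbook forms rather than the paper's exact specification:

```python
import math

def logistic_transition(s: float, gamma: float, c: float) -> float:
    # LSTAR transition: G in (0, 1), switching smoothly around threshold c;
    # gamma controls how abrupt the switch is.
    return 1.0 / (1.0 + math.exp(-gamma * (s - c)))

def exponential_transition(s: float, gamma: float, c: float) -> float:
    # ESTAR transition: G in [0, 1), symmetric (U-shaped) around c.
    return 1.0 - math.exp(-gamma * (s - c) ** 2)
```

In a STAR-GARCH hybrid, G weights two autoregressive regimes in the mean equation while a GARCH recursion models the conditional variance of the errors.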
Normal-Power Function Distribution with Logistic Quantile Function: Properties and Application
Original Research
Developing compound probability distributions is very important in probability and statistics because datasets from different fields exhibit different features. These features range from high skewness, peakedness (kurtosis), and bimodality to high dispersion, and so on. Existing distributions might not fit these emerging data of interest well, so there is a need to develop more robust and flexible distributions (positively skewed, negatively skewed, and bathtub-shaped) to handle some of these features. This paper therefore proposes a new four-parameter distribution called the Normal-Power{logistic} distribution. The proposed distribution is characterized by its density, distribution, survival, hazard, cumulative hazard, reversed hazard, and quantile functions. Properties such as the r-th moment, heavy-tail property, stochastic ordering, and mean inactivity time were obtained. A useful transformation of the proposed distribution to the normal distribution was shown to help generate its quantiles. The method of maximum likelihood estimation (MLE) was used to estimate the model parameters. A simulation study was carried out to test the consistency of the maximum likelihood parameter estimates; the results show that the biases reduce as the sample size increases for different parameter values. The importance of the new distribution was demonstrated empirically using a real-life dataset of gauge lengths of 10 mm. The proposed distribution was compared with five other competing distributions, and the results show that the proposed Normal-Power{logistic} distribution (NPLD) performed more favourably than the other five distributions under the AIC, CAIC, BIC, and HQIC criteria.
American Journal of Applied Mathematics and Statistics. 2021, 9(3), 90-101. DOI: 10.12691/ajams-9-3-3
Pub. Date: September 17, 2021
2902 Views, 2 Downloads
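The quantile-function ingredient named in the title can be illustrated generically. The sketch below shows the standard logistic quantile function and the inverse-transform sampling it enables; it uses only textbook definitions and is not the paper's NPLD construction:

```python
import math
import random

def logistic_quantile(p: float, mu: float = 0.0, s: float = 1.0) -> float:
    # Quantile (inverse CDF) of the logistic distribution:
    # Q(p) = mu + s * ln(p / (1 - p)), for 0 < p < 1.
    if not 0.0 < p < 1.0:
        raise ValueError("p must lie strictly between 0 and 1")
    return mu + s * math.log(p / (1.0 - p))

def inverse_transform_sample(quantile_fn, n: int, seed: int = 0) -> list:
    # Push uniform(0, 1) draws through any quantile function to sample
    # from the corresponding distribution.
    rng = random.Random(seed)
    return [quantile_fn(rng.random()) for _ in range(n)]
```

Any distribution whose quantile function is tractable, including a compound one built around a logistic quantile, can be sampled this way.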
Assessing Implicit Causes of Fast-Food Demand Fluctuation Through Facilitating an Exploratory Factor Analysis
Original Research
The demand for food, especially in the fast-food sector, has changed remarkably over the last decade among Bangladeshi consumers owing to the country's economic growth, so demand variation is a key issue for businesses seeking to satisfy customers' demand. To understand and keep track of the challenges, this study investigates the underlying significant causes of demand variation. A total of 333 respondents were interviewed with a 14-item structured questionnaire. The Kaiser-Meyer-Olkin measure and Bartlett's test of sphericity are used to assess sampling adequacy and the factorability of the data, respectively, along with Cronbach's alpha and composite reliability. Exploratory factor analysis extracted the 11 most significant items and grouped them into 4 structured factors. The findings of this study could assist practitioners in becoming more competitive in current business practice, as demand for fast-food products varies daily.
American Journal of Applied Mathematics and Statistics. 2021, 9(3), 83-89. DOI: 10.12691/ajams-9-3-2
Pub. Date: August 04, 2021
4144 Views, 17 Downloads
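Of the reliability measures listed, Cronbach's alpha is the simplest to compute directly. A minimal sketch using the standard formula (not the paper's own computation):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))
    """
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # one variance per item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of row sums
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)
```

Values near 1 indicate highly internally consistent items; a common rule of thumb treats alpha above 0.7 as acceptable for a questionnaire scale.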
Modelling a Multilevel Data Structure Using a Composite Index
Original Research
When modelling complex data structures related to a certain social aspect, there can be various hierarchical levels in which data units are nested within each other. There can also be several variables at each level, and those variables may not be unique to each case or record, making the data structure even more complex. Multilevel modelling has been used for decades to handle such data structures, but it may not always capture the structure fully, owing to the extent of the complexity of the data and the inherent issues of the procedure. On the other hand, ignoring the multilevel data structure when modelling can lead to incorrect estimates, and the model may therefore not achieve acceptable accuracy. This research explains a simple approach in which a complex multilevel structure is compressed to a single level by combining higher-level variables to form a composite index. Moreover, this composite index substantially reduces the number of variables considered in the entire modelling process. The process is exemplified using a primary dataset on household education expenditure gathered through a systematic sampling survey. Several variables were collected on each household, and another set of variables on each school-going child in the household, creating a multilevel data structure. The composite index, named the "Household Level Education Index", is developed through a factor analysis, and the detailed process of its construction is explained. LASSO regression was performed to illustrate the use of the proposed composite index by predicting monthly household education expenditure through a single-level regression model. Finally, a Random Forest model was used to examine feature importance, where the proposed "Household Level Education Index" was the most important feature in predicting monthly household educational expenditure.
American Journal of Applied Mathematics and Statistics. 2021, 9(3), 75-82. DOI: 10.12691/ajams-9-3-1
Pub. Date: July 23, 2021
2484 Views, 4 Downloads
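The idea of collapsing several higher-level variables into a single index can be sketched with a first-principal-component score, a common factor-analytic shortcut; the paper's actual factor-analysis construction may differ in its extraction and rotation choices:

```python
import numpy as np

def composite_index(X: np.ndarray) -> np.ndarray:
    """First-principal-component score as a single composite index.

    X: (n_units, n_variables) matrix of higher-level variables
       (assumed to have non-zero variance in every column).
    Returns one index value per unit.
    """
    # Standardize each variable so the index is scale-free
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    # Leading eigenvector of the correlation matrix gives the loadings
    corr = np.cov(Z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)  # eigenvalues in ascending order
    w = eigvecs[:, -1]                       # loadings of the top component
    return Z @ w
```

The resulting single column can then feed a one-level model (e.g. a LASSO regression) in place of the full set of higher-level variables.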