Friday, August 03, 2012

SAS Learning Roadmap — Stages 1 → 6

A forward-thinking, traditional guide — learn what matters, then touch SAS.

🌱 Stage 1: Foundations

Before touching SAS, ground yourself. In the old way — patient, meticulous — learn these pillars so the language feels like an old friend rather than an alien code.

  • Basic statistics — mean, median, variance, regression, distributions. Know what summary numbers tell you about data.
  • Data structures — tables, rows, columns. Think relationally: each row tells a story; each column holds a truth.
  • Programming logic — variables, loops, conditions. Flow control is the quiet muscle beneath every analysis.
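For the first bullet, two formulas worth keeping at hand (standard definitions, added here for reference) are the sample mean and sample variance:

\[ \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad s^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2 \]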

A skeptical question to keep: if a number is telling you a story, who wrote the first draft? Always ask where the data came from.

📖 Stage 2: Learning SAS Basics

Begin with the essentials. Install, open, and befriend SAS Studio or SAS OnDemand for Academics. Learn to run small programs before attempting grand experiments.

Environment & Setup

SAS University Edition has historically been popular; SAS OnDemand is the modern free cloud option. Open SAS Studio, create a new program, run it.

DATA Step — the heart

Used to read, clean, and manipulate data. The DATA step is where rows are born and filtered.

DATA mydata;
  SET sashelp.class;   /* copies an inbuilt dataset */
  WHERE age > 12;      /* filters */
RUN;

PROC Step — procedures

Procedures analyze or summarize: light, purposeful, and often short.

PROC MEANS DATA=sashelp.class;
  VAR height weight;
RUN;

Input & Output

Reading external files: PROC IMPORT for Excel/CSV, INFILE for text. Export with PROC EXPORT.
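A minimal sketch of both directions; the file paths and dataset names below are hypothetical placeholders, not from the original post:

```sas
/* Read a CSV into a SAS dataset (hypothetical path) */
PROC IMPORT DATAFILE="/home/myuser/mydata.csv"
            OUT=work.mydata
            DBMS=CSV
            REPLACE;
    GETNAMES=YES;   /* treat the first row as column names */
RUN;

/* Write a dataset back out as a CSV */
PROC EXPORT DATA=work.mydata
            OUTFILE="/home/myuser/mydata_clean.csv"
            DBMS=CSV
            REPLACE;
RUN;
```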

🔄 Stage 3: Core Skills

  • Data Cleaning — IF, WHERE, KEEP, DROP, RENAME.
  • Merging & Appending — SET and MERGE statements.
  • Formatting — PROC FORMAT for readable values.
  • Sorting & Summarizing — PROC SORT, PROC FREQ, PROC SUMMARY.

Practice: pick a CSV, clean missing values, rename columns into a tidy naming convention, and save the cleaned dataset.
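A sketch of what that practice run might look like, assuming a hypothetical imported dataset work.raw with columns age, income and a scratch column temp_flag:

```sas
DATA work.clean;
    SET work.raw;                    /* hypothetical imported dataset */
    WHERE NOT MISSING(age);          /* drop rows with a missing age  */
    RENAME income = annual_income;   /* tidy naming convention        */
    DROP temp_flag;                  /* discard a working variable    */
RUN;

PROC SORT DATA=work.clean;
    BY age;
RUN;
```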

📊 Stage 4: Analytics

Now the tools grow teeth. Apply statistical procedures to questions that matter.

  • Regression — PROC REG, PROC GLM.
  • Time series — PROC ARIMA.
  • Logistic regression — PROC LOGISTIC.
  • Survival analysis — PROC LIFETEST.
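A minimal sketch of the first bullet, using the built-in sashelp.class data (regressing weight on height); treat it as an illustration rather than a full analysis:

```sas
PROC REG DATA=sashelp.class;
    MODEL weight = height;   /* simple linear regression */
RUN;
QUIT;
```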

πŸ› Stage 5: Advanced SAS

  • Macros — automate repetition.
  • PROC SQL — use SQL in SAS for flexible joins and queries.
  • SAS Functions — dates, strings, arrays.
  • Efficiency — indexing, performance tuning.
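A small sketch combining the first two bullets: a PROC SQL query wrapped in a macro. The macro name and the grouping choice are illustrative, not from the original post:

```sas
%MACRO avg_height_by(groupvar);
    PROC SQL;
        SELECT &groupvar,
               AVG(height) AS avg_height
        FROM sashelp.class
        GROUP BY &groupvar;
    QUIT;
%MEND avg_height_by;

%avg_height_by(sex)   /* run the query grouped by sex */
```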

✨ Stage 6: Best Practices

Tradition meets craftsmanship: comment, document, and format with care.

  • Always comment your code. Explain why, not just what.
  • Readability over terseness — future you will thank present you.
  • Document changes and follow naming conventions.
  • Debugging — learn PUTLOG and how to read the SAS log well.
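A minimal PUTLOG sketch; the "suspicious value" condition here is invented purely to show the mechanics:

```sas
DATA _null_;
    SET sashelp.class;
    IF age > 15 THEN
        PUTLOG "NOTE: check this row > " name= age=;   /* writes variable names and values to the log */
RUN;
```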

📚 Recommended Resources

Books & courses that stood the test of time:

  • The Little SAS Book — Lora D. Delwiche & Susan J. Slaughter (classic, beginner-friendly).
  • Learning SAS by Example — Ron Cody.
  • SAS official free courses (SAS OnDemand). Coursera / edX beginner tracks.
  • Practice with built-in datasets: sashelp.class, sashelp.cars.

Formatting, Style & Examples

Below: concise examples showing bad & good. The old maxim holds: clarity beats compactness.

1. Avoid multiple statements on one line

Bad (everything on one line):

Data Urate_ny; Set Urate_US; if state='NY'; Run;

Good (one statement per line):

Data Urate_ny;
  Set Urate_US;
  If State='NY';
Run;

2. Formatting lists of variables (SQL in SAS)

Bad (variable list and clauses run together on one line):

Proc sql; CREATE table health_plan_choices as SELECT Company, Job, Health_plan FROM library.occ_source WHERE quarter_begin <= &Mquarter and quarter_end >= &Mquarter; quit;

Good (one variable per line, clauses aligned):

Proc sql;
  CREATE table health_plan_choices as
  SELECT Company,
         Job,
         Health_plan
  FROM library.occ_source
  WHERE quarter_begin <= &Mquarter
    and quarter_end >= &Mquarter;
quit;

3. Use comments for maintainability

Proc sql;
  CREATE table health_plan_choices as
  SELECT Company,
         Job,
         Health_plan,
         Worker_id   /* added March 4th by Abigail Hammond */
  FROM library.occ_source
  WHERE quarter_begin <= &Mquarter
    and quarter_end >= &Mquarter;
quit;

Why these practices matter: readability, easier debugging, maintainability, and meeting professional standards.

Quick Exercises

  1. Open SAS Studio and run the PROC MEANS example on sashelp.class.
  2. Create a DATA step that keeps only numeric variables from a CSV and exports a cleaned CSV.
  3. Rewrite a messy PROC SQL statement into the formatted good practice style above.

A final skeptical note: every result invites another question. The method is the map — but do not mistake the map for the land.

Made with care — follow tradition, ask questions, stay curious.

Tuesday, July 31, 2012

x̄ - > ATTACHMENT REPORT AT AMPATH AS FROM MAY 2ND TO JULY 31ST 2012

AMPATH Attachment Report – Zacharia Nyambu, 2012

📘 Cover Page

Chepkoilel University College

Flame of Knowledge and Innovation

Title: ATTACHMENT REPORT AT AMPATH AS FROM MAY 2ND TO JULY 31ST 2012
Department Attached at AMPATH: RESEARCH
Name: ZACHARIA NYAMBU
Reg No: SC/153/09
School: SCIENCE
Department: MATHEMATICS AND COMPUTER SCIENCE
Year of Study: THIRD YEAR
Date of Submission: 31/7/2012
πŸ™ Acknowledgement

ACKNOWLEDGEMENT

I thank AMPATH for giving me the opportunity to be part of this organization, and its entire staff for making my attachment a remarkable experience. Special appreciation to:

  • Mrs. Jepchirchir Kiplagat: Research Manager
  • Ms. Eunice Gift: Assistant Research Manager
  • Mr. Alfred Koskel: Research Assistant
  • Dr. Ann Mwangi: Head of the Biostatistics Department
  • Mr. Koech and Mr. Keter: Biostatisticians
  • Mr. Gilbert Simiyu, Ms. Monica Mwaniki and Mr. James Osanya: Data Managers
  • My fellow interns.
✍️ Declaration

DECLARATION

I hereby declare that this attachment report is my own work and has not been published or presented elsewhere.

NAME: ZACHARIA NYAMBU
SIGN:
[Handwritten signature]
DATE: 31/7/2012
SUPERVISOR: ALFRED KOSKEY
SIGN:
[Handwritten signature]
DATE: 31/7/12
Stamp: AIRPORT CENTRE RESEARCH, P.O. Box 46006 - 00100, dated 31 JUL 2012
🎯 Objectives of Attachment

CHAPTER 1

1. INTRODUCTION

OBJECTIVES OF ATTACHMENT

  • To acquire knowledge on data collection, data entry, data cleaning, data coding, data exporting, data quality, data analysis and data presentation.
  • To learn how to use statistical packages like STATA, SPSS, R and SAS.
  • To learn how to create databases using Microsoft Access and Excel.
  • To learn how to create questionnaires.
  • To acquire knowledge on field and functional activities of a participating organization.
  • To build a curriculum vitae.
  • To acquire knowledge on health data collection.
AMPATH
P.O. BOX 4606, ELDORET, KENYA 30100
TEL: +254532203471/2
FAX: 25453206072
WEB: WWW.AMPATHKENYA.ORG
📚 Abstract

ABSTRACT

This is a report on an attachment at AMPATH from 2nd May to 31st July 2012. It outlines the scope of the attachment, including how AMPATH works, its brief history, the duties and responsibilities undertaken, and the new skills and knowledge gained. The abstract highlights the collaboration among different departments to create an academic model aimed at improving the health of the Kenyan population. This is achieved through the identification, development, and timely dissemination of health and healthcare system information for decision-makers in medical care, public health, and public policy in Kenya and other resource-constrained settings.

🔠 Acronyms

ACRONYMS

  • AMPATH - Academic Model Providing Access to Healthcare
  • IU - INDIANA UNIVERSITY
  • SPSS - Statistical Package for the Social Sciences
  • SAS - Statistical Analysis System
  • STATA - Data Analysis and Statistical Software
  • R - Open Source Programming Language for Statistical Computing
  • HAART - Highly Active Antiretroviral Therapy
  • PMTCT - Prevention of Mother to Child Transmission
  • HHI - HAART and Harvest Initiative
  • HCT - HIV Counseling and Testing
  • FPI - Family Preservation Initiative
  • LACE - Legal Aid Centre of Eldoret
  • IFS - Immuno Suppressed
  • AMRS - AMPATH Medical Record System
  • DTC - Diagnostic Testing and Counselling
  • PCP - Pneumocystis Pneumonia
  • PTB - Pulmonary Tuberculosis
  • CD4 - Cluster of Differentiation 4
  • NFDA - No Known Food and Drug Allergies
  • HNAN - HIV Associated Nephropathy
  • PHC - Primary Health Care
πŸ₯ COMPANY BRIEF HISTORY

COMPANY BRIEF HISTORY

AMPATH (Academic Model Providing Access to Healthcare) is an academic model for the prevention and treatment of HIV/AIDS. It is a collaboration between Indiana University School of Medicine and Kenyan institutions, offering food, income, and other support to enhance the existing health infrastructure. It monitors antiretroviral treatment and aims to prevent HIV transmission, particularly from mother to child. AMPATH began in November 2001 with two sites and has expanded to 55 sites, with 30 being satellite sites. It treats approximately 130,000 patients and serves over 500,000 individuals through community-based services. The model is resource-conserving and integrated into the broader health system, with a focus on community-based care and support.

🎯 AMPATH MISSION, VISION & PRINCIPLES

AMPATH MISSION

To provide and expand sustainable access to high quality care by working to:

  • Provide excellent healthcare for individuals and populations
  • Develop passionate leaders in pharmacy
  • Perform research focused on local needs and global solutions
  • Establish critical healthcare infrastructure and systems

VISION

To innovate, provide and enhance quality care for all people through teamwork and dedication.

PRINCIPLES

As an IU–Kenya program involving the AMPATH Consortium, the Moi University faculty of health sciences and MTRH, the program will seek to:

  • Support and sustain a world-class program of high quality research across a broad spectrum ranging from basic sciences to translational and implementation research
  • Enhance all the healthcare institutions engaged in its research
  • Open working groups to any interested faculty of the AMPATH Consortium who are committed to collaborative research
  • Develop human capacity through training and education
  • Create infrastructure that enhances institutional capacity for the conduct of world class research.

💡 VALUES

AMPATH works to embrace:

  • Service with humility
  • A spirit of collaboration and partnership
  • Mutual respect and mutual benefit in organizational partnerships
  • A focus on vulnerable populations
  • Efforts to eliminate health disparities.
🧪 RESEARCH DEPARTMENT

RESEARCH DEPARTMENT

Conducts clinical and operational research. It entails data collection, data entry, data management, data cleaning, data exporting and data presentation, so that healthcare services can be of high quality.

🧡 SOCIAL SUPPORT PROGRAMS

FOOD DISTRIBUTION

Distribution centers provide food prescribed by the nutritionist to eligible patients and their dependents. The food comes from donations and other provisions from the HHI and the World Food Programme, which supports AMPATH by providing beans, corn and corn-soya blend.

FAMILY PRESERVATION INITIATIVE (FPI)

Helps clients gain a sense of economic security by teaching them new skills in jewelry making, papermaking and tailoring. It also provides loans to AMPATH clients to start businesses so that they can be financially stable.

PSYCHOSOCIAL SUPPORT

Provides psychosocial support groups to all clients so that they can live positively and deal with stress, stigma and adherence. It also provides individual counseling.

SOCIAL WORK

They identify clients' difficulties and barriers in order to help children and others who are vulnerable or disabled. They also work with other units and departments to ensure clients receive the highest quality of care.

ORPHANS AND VULNERABLE CHILDREN

This unit offers help to children and their families by providing school fees, school uniforms and other needs. It also supports caretakers by counseling them to be better parents.

PREVENTION

Offers prevention services to clients by providing them with services offered outside the hospital such as voluntary counseling and testing, home based counseling and testing, prevention with positives, condom distribution and behavior change communication.

LEGAL AID

Offers legal aid to clients who have legal issues.

PHC

Primary Health Care (PHC) focuses on safe delivery, family planning, immunization, maternal and child health (MCH), safe water, and implementation of the community health strategy.

🧭 AMPATH DEPARTMENTS AND DESCRIPTION

2. AMPATH DEPARTMENTS AND DESCRIPTION

AMPATH has different departments that offer different services, all with the objective of establishing critical healthcare infrastructure and systems. The departments are:

CLINICAL SERVICES

These are services offered by the clinic or modules, whereby clients are tested again to confirm HIV status. If positive, the client is registered, given an AMPATH ID number, and a number of lab investigations are done.

PROVIDER INITIATED TESTING AND COUNSELLING (PICT)

Provides an opportunity for every hospital client to receive free HIV counselling and testing services, including psychological support.

HAART PHARMACY

Provides a reliable and consistent supply of medications along with appropriate counseling.

OUTREACH DEPARTMENT

Clients fill in locator forms that help the outreach department find them when they miss their appointments for more than two months. If the outreach department locates the client, he or she is brought back to the clinic and the missed appointments are rescheduled.

NUTRITION SERVICES

This department provides nutritional support to the clients to help them maintain their health and improve their immune system. The department also provides education on proper nutrition and conducts assessments to determine the nutritional status of the clients. In addition, the department provides food supplements to clients who are malnourished or at risk of malnutrition.

PREVENTION DEPARTMENT

This department provides services to reduce risk of HIV infection. It offers education on HIV prevention, condom distribution, voluntary counselling and testing (VCT), and prevention of mother-to-child transmission (PMTCT) services. The department also conducts community outreach programs to raise awareness about HIV prevention.

HEALTH INFORMATION DEPARTMENT (HHI)

Demonstration trainings on different topics are done to the clients. The department also trains clients on income generating activities to help them improve their economic status and provide them with high quality farm produce.

🧑‍💻 CHAPTER 2 – EXPERIENCES AT AMPATH

3.1 DUTIES AND RESPONSIBILITIES

DATA COLLECTION

Data collection took place in the MTRH ward, divided into four wards (men, women, amenorrhea, and tumor). Data was collected only if the patient was an S or S-exposed child. Information was gathered from the inpatient registration form, including patient name, date of admission, discharge date, chief complaints, impression, discharge diagnosis, orders, and prophylaxis. Data was then entered into AMPATH's system using a questionnaire.

DATA ENTRY

Information collected was entered into the AMPATH database, which includes patient registration and visit details. The database checks for duplicates and provides feedback if a patient has a readmission, showing previous visit information.

DATA EDITING AND RETRIEVAL

If there is missing data, it is retrieved from the ward and re-entered into the AMRS (AMPATH MEDICAL RECORD SYSTEM) using the nurse’s unit station.

ADMINISTRATIVE DUTIES

Duties included printing, scanning of documents, and assisting in the research office.

🧠 4.2 NEW SKILLS LEARNT

  • DATA COLLECTION METHODS
  • DATA ENTRY
  • CREATION OF DATABASE
  • COLLECTING HEALTH DATA
  • CREATION OF QUESTIONNAIRE
  • DATA MANAGEMENT
  • DATA ANALYSIS
  • SCANNING OF DOCUMENTS
  • PRINTING OF DOCUMENTS
  • STATISTICAL PACKAGES LIKE SAS, STATA AND R
  • EPI INFO AND INFOPATH
  • ADMINISTRATIVE SKILLS
  • WORK ETHICS
  • TERMINOLOGIES IN MEDICAL FIELD

😊 4.3 THINGS ENJOYED

  • Learning statistical packages, which helped me apply what I had learned theoretically in statistics
  • Access to the internet, which helped me search for notes or read up on topics when given assignments
  • Interacting with the biostatisticians and data managers, who helped me evaluate my career choices
  • Collecting data in the ward and entering it into the AMPATH database

🀝 4.4 RELATIONSHIP WITH STAFF

During my attachment, I had a good relationship with the staff. I interacted with the biostatisticians, who trained me on data analysis and interpretation, and the data managers, who guided me on how to manage and clean data using SAS. I also interacted with Prof. Joseph Hogan, a biostatistician and dean at Brown University, who guided me on my future career and opportunities. I learnt effective communication skills, dress code and office ethics.

🎓 BENEFITS OF ATTACHMENT

The benefits of this attachment are: learning new statistical packages, understanding ethics in a work environment, knowing which career choices to make, strengthening my CV, and knowing what to expect in a job interview.

⚠️ CHALLENGES ENCOUNTERED

  • Seeing people die in the ward while collecting data
  • Viruses in the computer
  • Some patients had not been tested on admission, so their status was not determined and their data could not be collected.
✅ CONCLUSION

CONCLUSION

In conclusion, this attachment period has been very important because I have been able to put into practice what I learned in school. All the skills I learnt, such as data collection, coding, data entry, data management and analysis, have increased my knowledge of statistics. This attachment has also given me the chance to learn statistical packages like STATA, SAS and R and how to analyze data using them. Beyond gaining knowledge, I have developed interpersonal skills in a professional setting.

💡 RECOMMENDATIONS

RECOMMENDATIONS

  • TESTING OF PATIENTS ON ADMISSION
  • INSTALLING ANTIVIRUS SOFTWARE TO REMOVE VIRUSES THAT CAUSE LOSS OF INFORMATION
  • INCREASING THE NUMBER OF COMPUTERS THAT ATTACHEES HAVE ACCESS TO
  • HAVING A STRICT SCHEDULE OF ATTACHEES' ACTIVITIES AT AMPATH FROM THE DAY THEY REPORT.
📚 REFERENCES

REFERENCES

  • AMPATH Training Institute (ATI)
  • AMPATH inpatient registration form
  • AMPATH website
📎 APPENDICES

APPENDICES

(Reserved for supporting materials.)

Author: Zacharia Nyambu • Chepkoilel University College • Attachment at AMPATH (May 2 – July 31, 2012)

Wednesday, June 27, 2012

x̄ - > Working with exponents in Calculus

In calculus, you often work with functions that involve exponentials. If you want to manipulate these functions using R code, you can do so by defining functions and using basic mathematical operations. Here's some R code that demonstrates common rules of exponents in calculus:


```R

# Define a variable and constants

x <- 2

a <- 3

b <- 2


# Exponentiation rules

# Rule 1: Product of Powers

result1 <- x^a * x^b  # x^(a + b)


# Rule 2: Quotient of Powers

result2 <- x^a / x^b  # x^(a - b)


# Rule 3: Power of a Power

result3 <- (x^a)^b  # x^(a * b)


# Rule 4: Power of a Product

result4 <- (x * a)^b  # (x * a)^b = x^b * a^b


# Rule 5: Power of a Quotient

result5 <- (x / a)^b  # (x / a)^b = x^b / a^b


# Print the results

cat("Rule 1: x^a * x^b =", result1, "\n")

cat("Rule 2: x^a / x^b =", result2, "\n")

cat("Rule 3: (x^a)^b =", result3, "\n")

cat("Rule 4: (x * a)^b =", result4, "\n")

cat("Rule 5: (x / a)^b =", result5, "\n")

```


In this code, we demonstrate the following exponentiation rules:


1. Product of Powers: \(x^a \cdot x^b = x^{a+b}\)

2. Quotient of Powers: \(x^a / x^b = x^{a-b}\)

3. Power of a Power: \((x^a)^b = x^{a \cdot b}\)

4. Power of a Product: \((x \cdot a)^b = x^b \cdot a^b\)

5. Power of a Quotient: \((x / a)^b = x^b / a^b\)


You can change the values of `x`, `a`, and `b` to explore how these rules work with different inputs.

Here's an example of a simple calculus proof represented in code. This is a basic demonstration that the derivative of the function f(x) = x^2 is equal to 2x.


```python

import sympy as sp


# Define the symbolic variable and the function

x = sp.symbols('x')

f_x = x**2


# Calculate the derivative of the function

f_prime_x = sp.diff(f_x, x)


# Simplify the derivative

f_prime_x = sp.simplify(f_prime_x)


# Print the result

print("f'(x) =", f_prime_x)

```


In this code:


1. We import the `sympy` library for symbolic mathematics in Python.

2. We define a symbolic variable `x` and the function `f_x` as `x^2`.

3. We calculate the derivative of `f_x` with respect to `x` using `sp.diff`.

4. We simplify the derivative using `sp.simplify`.

5. Finally, we print the simplified derivative, which should be `2x`, as expected.


This is just a basic example. You can use symbolic math libraries like SymPy or other tools to perform more complex calculus proofs in code.

The chain rule is a fundamental concept in calculus that allows you to find the derivative of a composite function. In R, you can use basic arithmetic operations and functions to apply the chain rule. Here's an example of how you can use R to compute the derivative of a composite function using the chain rule:
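(For reference, the rule itself, a standard identity rather than anything specific to this post, is \( \frac{d}{dx}\, f(g(x)) = f'(g(x)) \cdot g'(x) \).)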


Suppose you have a composite function f(g(x)), and you want to find its derivative. You can use the following R code to calculate it:


```R

# Define the functions f(x) and g(x)

f <- function(x) x^2

g <- function(x) 2*x + 1


# Define x and calculate f(g(x))

x <- 3

fg_x <- f(g(x))


# Calculate the derivatives of f(x) and g(x)

df_dx <- function(x) 2*x  # Derivative of f(x)

dg_dx <- function(x) 2    # Derivative of g(x)


# Use the chain rule to calculate df/dx = df/dg * dg/dx

df_dg <- df_dx(g(x))

df_dx_chain_rule <- df_dg * dg_dx(x)


cat("f(g(x)) =", fg_x, "\n")

cat("df/dx =", df_dx(x), "\n")

cat("df/dx (Chain Rule) =", df_dx_chain_rule, "\n")

```


In this code:


1. We define two functions, `f(x)` and `g(x)`, representing the individual functions in the composite function.


2. We specify a value for `x`, which is the point at which we want to calculate the derivative.


3. We calculate `f(g(x))` by first evaluating `g(x)` and then applying `f()` to the result.


4. We define the derivatives of `f(x)` and `g(x)` as separate functions, `df_dx` and `dg_dx`.


5. Using the chain rule, we calculate `df/dx` by multiplying `df/dg` and `dg/dx` at the point `x`.


6. Finally, we print the values of `f(g(x))`, `df/dx`, and `df/dx` calculated using the chain rule.


You can modify this code to work with different functions and values of `x` to compute derivatives for other composite functions.

The product rule is a fundamental concept in calculus that allows you to find the derivative of the product of two functions. In mathematical notation, it's expressed as:


d(uv)/dx = u * dv/dx + v * du/dx


In R, you can calculate the derivative of the product of two functions by defining the functions and then applying the product rule formula. Here's an example of R code that implements the product rule:


```R

# Define two functions u(x) and v(x)

u <- function(x) {

  # Define your first function here, for example, u(x) = x^2

  return(x^2)

}


v <- function(x) {

  # Define your second function here, for example, v(x) = sin(x)

  return(sin(x))

}


# Define the derivative functions

du_dx <- function(x) {

  # Calculate the derivative of u(x)

  return(2 * x)  # Derivative of x^2

}


dv_dx <- function(x) {

  # Calculate the derivative of v(x)

  return(cos(x))  # Derivative of sin(x)

}


# Apply the product rule

product_rule_derivative <- function(x) {

  return(u(x) * dv_dx(x) + v(x) * du_dx(x))

}


# Test the product rule derivative at a specific point, e.g., x = 1

x_value <- 1

result <- product_rule_derivative(x_value)

cat("The derivative of u(x) * v(x) at x =", x_value, "is", result)

```


In this code, you'll need to replace the definitions of the `u` and `v` functions with the functions you want to use in your specific problem. Then, you can call the `product_rule_derivative` function to calculate the derivative of their product at a specific point.

Saturday, June 02, 2012

x̄ - > linearizing a model

An example of an F-distribution calculation in R: the original post embedded an interactive Vega-Lite visualization of the F-distribution density over a range of `x` values, computed with the `pf()` cumulative distribution function.


To calculate the rate of return in R, you can use the following code:

```R
# Define the initial investment and final value
initial_investment <- 10000
final_value <- 15000

# Calculate the rate of return
rate_of_return <- (final_value - initial_investment) / initial_investment

# Display the result
cat("Rate of return:", rate_of_return)
```

In this example, we assume an initial investment of $10,000 and a final value of $15,000. The rate of return is calculated by subtracting the initial investment from the final value, dividing it by the initial investment, and expressing it as a decimal.

The result is then printed using `cat()`. You can modify the `initial_investment` and `final_value` variables to match your specific scenario.

To calculate the simple volatility of a series of returns in R, you can use the following code:

```R
# Define the returns series
returns <- c(0.05, 0.02, -0.03, 0.04, -0.01)

# Calculate the mean return
mean_return <- mean(returns)

# Calculate the differences from the mean
differences <- returns - mean_return

# Calculate the squared differences
squared_differences <- differences^2

# Calculate the simple volatility
simple_volatility <- sqrt(mean(squared_differences))

# Display the result
cat("Simple Volatility:", simple_volatility)
```

In this example, we have a series of returns stored in the `returns` vector. We calculate the mean return by using the `mean()` function. Then, we calculate the differences between each return and the mean return. Next, we square the differences to obtain the squared differences. Finally, the simple volatility is computed by taking the square root of the mean of the squared differences.

The result is displayed using `cat()`. You can modify the `returns` vector to include your own series of returns.
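In formula form, this is the (population-style) estimator the code above computes:

\[ \sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(r_i - \bar{r}\right)^2} \]

where \(r_i\) are the individual returns and \(\bar{r}\) is their mean.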

Linearizing models involves transforming a nonlinear model into a linear form to facilitate analysis and parameter estimation. This can be achieved through various techniques such as linearization by approximation, logarithmic transformation, or using Taylor series expansion.

Here's an example of linearizing a nonlinear model using the logarithmic transformation:

Suppose we have a nonlinear model of the form:
```
y = a * exp(b * x)
```

To linearize this model, we can take the natural logarithm (log) of both sides:
```
log(y) = log(a) + b * x
```

Now, the transformed model is linear:
```
z = c + d * x
```

where `z = log(y)`, `c = log(a)`, and `d = b`.

In R, you can perform the linearization and estimate the linear model using the logarithmic transformation as follows:

```R
# Sample data
x <- c(1, 2, 3, 4, 5)
y <- c(5, 12, 27, 48, 75)

# Logarithmic transformation
z <- log(y)

# Linear regression
linear_model <- lm(z ~ x)

# Print the linear model coefficients
coefficients <- coef(linear_model)
cat("Intercept (c):", coefficients[1], "\n")
cat("Slope (d):", coefficients[2])
```

In this example, we have `x` and `y` as the input and output variables, respectively. We take the natural logarithm of `y` and store it in `z`. Then, we perform linear regression (`lm()`) with `z` as the response variable and `x` as the predictor variable. The coefficients of the linear model (`c` and `d`) are obtained using `coef()`. Finally, we print the intercept (`c`) and slope (`d`) values.

Note that linearizing a model may introduce additional assumptions or limitations, and it is essential to interpret the results in the context of the transformed variables.

Monday, May 28, 2012

x̄ - > Kurtosis

 Kurtosis is a statistical measure that describes the shape of a probability distribution or the peakedness of a dataset. It quantifies how much of the data is concentrated in the tails of the distribution compared to the center. 


There are different ways to define kurtosis, but the most common definition is based on the fourth moment of a distribution. The fourth central moment is calculated by subtracting the mean from each data point, raising the result to the fourth power, and then averaging those values. Kurtosis is obtained by dividing this fourth central moment by the fourth power of the standard deviation (equivalently, the square of the variance).
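Written out (the standard textbook form, added here for reference):

\[ \mathrm{Kurt}(X) = \frac{\frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^4}{\left(\frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2\right)^{2}} \]

with "excess kurtosis" defined as this quantity minus 3, the kurtosis of the normal distribution.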


Positive excess kurtosis indicates that the distribution has heavier tails and a sharper peak than the normal distribution (a leptokurtic distribution). Negative excess kurtosis indicates that the distribution has lighter tails and a flatter peak than the normal distribution (a platykurtic distribution). An excess kurtosis of zero (that is, a kurtosis of 3) means that the distribution has the same tail behaviour as the normal distribution (a mesokurtic distribution).



It's important to note that kurtosis alone does not provide information about the specific shape of the distribution. For example, different distributions can have the same kurtosis value. Therefore, it's often used in conjunction with other statistical measures and graphical representations to fully understand the characteristics of a dataset.


# Install and load the moments package

install.packages("moments")

library(moments)


# Create a vector of data

data <- c(1, 2, 3, 4, 5)


# Calculate the kurtosis

kurtosis_value <- kurtosis(data)

print(kurtosis_value)



The same calculation in Python:

import numpy as np
from scipy.stats import kurtosis

# Create a numpy array of data
data = np.array([1, 2, 3, 4, 5])

# Calculate the kurtosis
kurtosis_value = kurtosis(data)
print(kurtosis_value)

Saturday, May 12, 2012

x̄ - > Popular and useful R code

Here are some popular and useful R code snippets that have been widely used. Please note that trends in programming languages change over time, so it's always a good idea to stay up to date with the latest developments in the R programming community.


1. Data Manipulation with dplyr:

   ```R

   library(dplyr)


   # Filter rows based on a condition

   filtered_data <- filter(data, condition)


   # Select specific columns

   selected_data <- select(data, column1, column2)


   # Arrange rows based on a variable

   arranged_data <- arrange(data, variable)


   # Group by a variable and summarize data

   summarized_data <- data %>% group_by(variable) %>% summarise(avg = mean(value))


   # Join two data frames

   merged_data <- inner_join(data1, data2, by = "key_column")

   ```


2. Data Visualization with ggplot2:

   ```R

   library(ggplot2)


   # Create a scatter plot

   ggplot(data, aes(x = x_variable, y = y_variable)) +

     geom_point()


   # Create a bar plot

   ggplot(data, aes(x = x_variable, y = y_variable)) +

     geom_bar(stat = "identity")



   # Create a line plot

   ggplot(data, aes(x = x_variable, y = y_variable)) +

     geom_line()


   # Add color or fill based on a variable

   ggplot(data, aes(x = x_variable, y = y_variable, color = variable)) +

     geom_point()


   # Facet the plot based on a variable

   ggplot(data, aes(x = x_variable, y = y_variable)) +

     geom_point() +

     facet_wrap(~ variable)

   ```


3. Machine Learning with caret:

   ```R

   library(caret)


   # Split data into training and testing sets

   train_test_split <- createDataPartition(y = data$target_variable, p = 0.7, list = FALSE)

   training_data <- data[train_test_split, ]

   testing_data <- data[-train_test_split, ]


   # Train a linear regression model

   lm_model <- train(target_variable ~ ., data = training_data, method = "lm")


   # Train a random forest model

   rf_model <- train(target_variable ~ ., data = training_data, method = "rf")


   # Make predictions on new data

   lm_predictions <- predict(lm_model, newdata = testing_data)

   rf_predictions <- predict(rf_model, newdata = testing_data)


   # Evaluate model performance

   lm_rmse <- RMSE(lm_predictions, testing_data$target_variable)

   rf_rmse <- RMSE(rf_predictions, testing_data$target_variable)

   ```


These are just a few examples of trending R code snippets, and there are many more techniques and libraries available in the R ecosystem. It's always a good idea to explore and stay updated with the latest developments in the R programming community.


Tuesday, April 24, 2012

x̄ - > Axiom of the Unordered Pair and Axiom Schema

 The Axiom of the Unordered Pair is one of the axioms in set theory, which allows us to form a set containing two given elements without regard to the order in which they are listed. In other words, if we have elements `a` and `b`, then there exists a set containing both `a` and `b`, denoted as `{a, b}`.
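Stated formally (this is the standard Zermelo–Fraenkel formulation, usually called the Axiom of Pairing):

\[ \forall a\, \forall b\, \exists c\, \forall x\, \big( x \in c \iff (x = a \lor x = b) \big) \]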


In R, we can implement a function that follows the principles of the Axiom of the Unordered Pair using lists. In R, a list is an ordered collection of objects, and we can use it to create an "unordered pair" set by simply putting the two elements into the list.


Here's a simple R function to create an unordered pair:


```r

create_unordered_pair <- function(a, b) {

  return(list(a, b))

}

```


Now, let's test this function:


```r

# Example usage:

pair <- create_unordered_pair("apple", "banana")

print(pair)

```


Output:


```

[[1]]

[1] "apple"


[[2]]

[1] "banana"

```


As you can see, the `create_unordered_pair` function takes two arguments `a` and `b`, and it returns a list with those elements, effectively creating an unordered pair set.


Note: The above R code is a simple demonstration of the Axiom of the Unordered Pair, and it is a basic representation. In formal set theory, the Axiom of the Unordered Pair is stated more rigorously within the axiomatic framework of Zermelo-Fraenkel set theory. The implementation provided here is meant to illustrate the concept in a programming context using R's list data structure.


The Axiom Schema is a set of axioms in set theory that allows us to create new sets based on logical formulas. It allows us to form sets using arbitrary logical conditions. Since the Axiom Schema is a collection of infinitely many axioms, it cannot be implemented directly in R code. However, I can provide an example of how we can use R to demonstrate the concept of generating sets based on a specific logical condition.
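As one concrete instance of such a schema, the Axiom Schema of Separation states that for every formula \(\varphi(x)\):

\[ \forall A\, \exists B\, \forall x\, \big( x \in B \iff (x \in A \land \varphi(x)) \big) \]

that is, from any set \(A\) we may carve out the subset of elements satisfying \(\varphi\).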


Let's consider an example of using R to create a set of even numbers. We will define a function that generates a set containing all even numbers up to a given limit `n`.


```r

create_even_set <- function(n) {

  even_set <- c()

  for (i in 1:n) {

    if (i %% 2 == 0) {

      even_set <- c(even_set, i)

    }

  }

  return(even_set)

}

```


In this example, we define the function `create_even_set` that takes an integer `n` as an argument. The function initializes an empty vector `even_set` and iterates through numbers from 1 to `n`. If the number `i` is even (i.e., when `i %% 2 == 0`), we add it to the `even_set` using the `c()` function.


Let's test the function:


```r

even_numbers_up_to_10 <- create_even_set(10)

print(even_numbers_up_to_10)

```


Output:


```

[1] 2 4 6 8 10

```


The output is a set containing all even numbers from 1 to 10.


Remember, this implementation in R is just a basic example to demonstrate the concept of generating sets based on a logical condition. In formal set theory, the Axiom Schema is stated more rigorously and is not directly implementable in programming languages like R. Instead, it forms the basis of set theory, allowing mathematicians to reason about sets and their properties.


Friday, April 13, 2012

x̄ - > Analysis of health data and optimum nutrition

 


Creating a complete and accurate analysis of health data and optimum nutrition requires extensive domain knowledge, specific data, and well-defined research objectives. However, I can provide a general outline of how you might approach such a project and some example code for basic analysis. Keep in mind that this is just a starting point, and you may need to adapt it based on your specific needs and data.


1. **Data Preparation**: Load and clean your health data, ensuring it's in a suitable format for analysis. You may have data on individuals' health metrics, dietary habits, exercise routines, medical history, etc.


2. **Exploratory Data Analysis (EDA)**: Perform exploratory analysis to understand the distribution of variables, identify missing values, and explore relationships between health metrics and nutrition. Use visualizations like histograms, box plots, and scatter plots.


3. **Statistical Analysis**: Conduct statistical tests and modeling to investigate the relationship between health metrics and nutrition. For example, you can use regression models to explore how different nutrients impact specific health outcomes.


4. **Optimum Nutrition Analysis**: Define what "optimum nutrition" means in your context and create a metric or index to measure it. This might involve assessing the intake of essential nutrients relative to recommended daily allowances or specific health targets.


5. **Data Visualization**: Visualize the relationships between health metrics and nutrition, showcasing how optimal nutrition affects health outcomes. This could involve plotting nutrient intake against health scores or outcomes.


6. **Machine Learning (Optional)**: Consider using machine learning techniques to predict health outcomes based on nutrition data or to identify patterns in the data that contribute to better health.


7. **Recommendations**: Based on your analysis, provide recommendations on how individuals can optimize their nutrition to improve specific health metrics.


Here's an example R code snippet for a simple correlation analysis between health metrics and nutrition using a synthetic dataset:


```R

# Load necessary libraries

library(ggplot2)


# Create a sample dataset

set.seed(42)  # Setting seed for reproducibility


n <- 100  # Number of individuals

blood_pressure <- rnorm(n, mean = 120, sd = 10)

calories_intake <- rnorm(n, mean = 2000, sd = 500)

vitamin_c <- rnorm(n, mean = 100, sd = 20)


# Create a data frame

health_data <- data.frame(Blood_Pressure = blood_pressure,

                          Calories_Intake = calories_intake,

                          Vitamin_C_Intake = vitamin_c)


# Correlation analysis

correlation_matrix <- cor(health_data)

print(correlation_matrix)


# Scatter plot of Blood Pressure vs. Calories Intake

ggplot(health_data, aes(x = Calories_Intake, y = Blood_Pressure)) +

  geom_point() +

  labs(x = "Calories Intake", y = "Blood Pressure",

       title = "Scatter Plot of Blood Pressure vs. Calories Intake")


# Scatter plot of Blood Pressure vs. Vitamin C Intake

ggplot(health_data, aes(x = Vitamin_C_Intake, y = Blood_Pressure)) +

  geom_point() +

  labs(x = "Vitamin C Intake", y = "Blood Pressure",

       title = "Scatter Plot of Blood Pressure vs. Vitamin C Intake")

```


Again, remember that this is just a basic example for demonstration purposes. In a real-world project, you would use actual health data and implement more advanced analyses and models to draw meaningful conclusions about optimum nutrition and health outcomes.


Saturday, March 24, 2012

x̄ - > Paired t-test on blood pressure data for a group


Example of R code to perform a paired t-test on blood pressure data for a group of patients before and after a treatment:


```R

# Create a sample dataset with blood pressure measurements

before_treatment <- c(120, 130, 140, 115, 135)

after_treatment <- c(110, 125, 130, 112, 130)


# Combine the data into a data frame

blood_pressure_data <- data.frame(Before = before_treatment, After = after_treatment)


# Conduct a paired t-test

result <- t.test(blood_pressure_data$Before, blood_pressure_data$After, paired = TRUE)


# Print the t-test results

cat("Paired t-test results:\n")

cat("t-statistic =", result$statistic, "\n")

cat("p-value =", result$p.value, "\n")

cat("Degrees of freedom =", result$parameter, "\n")

```


In this example, we first create two vectors `before_treatment` and `after_treatment` containing blood pressure measurements before and after the treatment, respectively. Then, we combine the data into a data frame `blood_pressure_data`.


Next, we conduct a paired t-test using the `t.test` function with the `paired = TRUE` argument, which specifies that the data are paired (before and after measurements from the same patients). The function returns a list of results, including the t-statistic, p-value, and degrees of freedom.
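For reference, the t-statistic that `t.test` computes here is the standard paired-sample formula (not something specific to this post):

\[ t = \frac{\bar{d}}{s_d / \sqrt{n}}, \qquad df = n - 1 \]

where \(d_i\) are the before-minus-after differences, \(\bar{d}\) their mean, \(s_d\) their standard deviation, and \(n\) the number of pairs.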


Finally, we print out the t-test results using the `cat` function.


Keep in mind that this is a simple example with synthetic data. In real-world scenarios, you would replace the `before_treatment` and `after_treatment` vectors with your actual data from your health project to perform the t-test on your dataset. Also, consider performing data preparation and validation steps before conducting statistical analyses on actual health data.


The same analysis can be done in SAS:

/* Read the data into SAS */

data health_data;

input Patient_ID Blood_Pressure_Before Blood_Pressure_After;

datalines;

1 120 110

2 130 125

3 140 130

4 115 112

5 135 130

;

run;


/* Descriptive Statistics */

proc means data=health_data n mean std;

  var Blood_Pressure_Before Blood_Pressure_After;

run;


/* Paired t-test */

proc ttest data=health_data;

  paired Blood_Pressure_Before*Blood_Pressure_After;

run;


To provide you with a sample R code for a health data project related to blood pressure and nutritional needs, I'll assume we want to analyze the relationship between blood pressure measurements and nutritional intake for a group of individuals. In this example, we'll generate some synthetic data to demonstrate the process.

Let's create a dataset with columns for "Blood_Pressure," "Calories_Intake," and "Sodium_Intake." We'll then perform some basic analyses, including scatter plots and correlation calculations.

```R
# Load necessary libraries
library(ggplot2)

# Create a sample dataset
set.seed(42)  # Setting seed for reproducibility

# Number of individuals in the dataset
n <- 100

# Generating synthetic data for blood pressure, calories intake, and sodium intake
blood_pressure <- rnorm(n, mean = 120, sd = 10)
calories_intake <- rnorm(n, mean = 2000, sd = 500)
sodium_intake <- rnorm(n, mean = 2000, sd = 500)

# Creating a data frame
health_data <- data.frame(Blood_Pressure = blood_pressure,
                          Calories_Intake = calories_intake,
                          Sodium_Intake = sodium_intake)

# Scatter plot of Blood Pressure vs. Calories Intake
ggplot(health_data, aes(x = Calories_Intake, y = Blood_Pressure)) +
  geom_point() +
  labs(x = "Calories Intake", y = "Blood Pressure",
       title = "Scatter Plot of Blood Pressure vs. Calories Intake")

# Scatter plot of Blood Pressure vs. Sodium Intake
ggplot(health_data, aes(x = Sodium_Intake, y = Blood_Pressure)) +
  geom_point() +
  labs(x = "Sodium Intake", y = "Blood Pressure",
       title = "Scatter Plot of Blood Pressure vs. Sodium Intake")

# Correlation analysis
correlation_matrix <- cor(health_data[, c("Blood_Pressure", "Calories_Intake", "Sodium_Intake")])
print(correlation_matrix)
```

In this example, we generate synthetic data for "Blood_Pressure," "Calories_Intake," and "Sodium_Intake" using the `rnorm` function, which creates random samples from a normal distribution. We then create a data frame `health_data` to store this data.

We then create two scatter plots using the `ggplot2` library to visualize the relationship between "Blood_Pressure" and "Calories_Intake" and "Sodium_Intake," respectively.

Finally, we calculate the correlation matrix using the `cor` function to examine the correlation between the variables.

Note that this is a simple demonstration using synthetic data. In a real-world health data project, you would use actual data collected from individuals, and you might need to perform more advanced statistical analyses, such as regression modeling or hypothesis testing, based on the specific research question and objectives of the project.

Wednesday, March 14, 2012

x̄ - > Investment Finance using alphas an R code illustration


Investment Finance

In finance, "alpha" refers to a measure of an investment's performance compared to a benchmark index, after accounting for the risk taken. It indicates how much the investment outperforms or underperforms the benchmark.
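The specific variant computed below is Jensen's alpha from the CAPM; its textbook form (a standard definition, not taken from the original post) is

\[ \alpha = \bar{R}_p - \left[ R_f + \beta_p\left(\bar{R}_m - R_f\right) \right] \]

where \(\bar{R}_p\) is the average portfolio return, \(\bar{R}_m\) the average benchmark (market) return, \(R_f\) the risk-free rate and \(\beta_p\) the portfolio's beta against the benchmark.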


Here's an example of calculating and interpreting the alpha of a portfolio using R, assuming you have a dataset with portfolio returns and benchmark returns:


```R

# Load necessary libraries

library(PerformanceAnalytics)


# Example portfolio returns and benchmark returns

portfolio_returns <- c(0.02, 0.03, 0.01, 0.04, 0.02)

benchmark_returns <- c(0.01, 0.02, 0.02, 0.03, 0.03)


# Calculate excess returns (portfolio - benchmark)

excess_returns <- portfolio_returns - benchmark_returns


# Calculate the alpha using the CAPM model

alpha <- CAPM.jensenAlpha(portfolio_returns, benchmark_returns, Rf = 0)


# Print the alpha

cat("Portfolio Alpha:", alpha, "\n")


# Interpretation of alpha

if (alpha > 0) {

  cat("The portfolio has a positive alpha, indicating it has outperformed the benchmark.\n")

} else if (alpha < 0) {

  cat("The portfolio has a negative alpha, indicating it has underperformed the benchmark.\n")

} else {

  cat("The portfolio has a zero alpha, indicating it has performed in line with the benchmark.\n")

}

```


This code uses the `PerformanceAnalytics` package to calculate Jensen's Alpha (a type of alpha) for a portfolio. It computes the portfolio's excess returns over the benchmark for reference, then calculates Jensen's alpha from the portfolio and benchmark returns using the Capital Asset Pricing Model (CAPM) approach, with the risk-free rate set to zero here. Finally, it interprets the alpha based on whether it's positive, negative, or zero.


Please note that this is a simplified example, and in practice, the calculation and interpretation of alpha can be more complex and may involve various risk factors, regression analysis, and other considerations. Always make sure to adapt the code and calculations to your specific use case and data.

Wednesday, February 29, 2012

x̄ - > The cumulative mean (or cumulative average) and Moving Average



In R, you can calculate the cumulative mean of a vector or a sequence of numbers using various methods. One way is to use a loop to calculate the cumulative sum and divide it by the cumulative count at each step. Alternatively, you can use the `cumsum()` function to compute the cumulative sum and then calculate the cumulative mean directly.

Let's illustrate both methods with some code examples:

Method 1: Using a loop to calculate the cumulative mean.

```R
# Sample data vector
data_vector <- c(10, 20, 30, 40, 50)

# Empty vector to store cumulative means
cumulative_means <- numeric(length(data_vector))

# Calculate cumulative mean using a loop
for (i in 1:length(data_vector)) {
  cumulative_means[i] <- mean(data_vector[1:i])
}

# Print the result
print(cumulative_means)
```

Method 2: Using the `cumsum()` function to calculate the cumulative mean.

```R
# Sample data vector
data_vector <- c(10, 20, 30, 40, 50)

# Calculate cumulative sum
cumulative_sum <- cumsum(data_vector)

# Calculate cumulative mean
cumulative_means <- cumulative_sum / seq_along(data_vector)

# Print the result
print(cumulative_means)
```

Both methods will produce the same output:

```
[1] 10 15 20 25 30
```

In the output, you can see that the first element of the `cumulative_means` vector is the same as the first element of the `data_vector`, the second element is the mean of the first two elements of `data_vector`, the third element is the mean of the first three elements of `data_vector`, and so on. This is the cumulative mean of the data_vector at each step.


Both cumulative mean and moving average are methods used to smooth data and identify trends over time. However, they are different in their calculations and applications.

1. Cumulative Mean:
The cumulative mean (or cumulative average) is a measure of the arithmetic mean of a sequence of numbers up to a given point. It provides the average value of the data accumulated so far. As we illustrated earlier, it is calculated by dividing the cumulative sum by the number of data points up to that position.

2. Moving Average:
The moving average (or rolling average) is a technique used to analyze data points by creating averages of subsets of the entire dataset. It is particularly useful for identifying trends or patterns in time series data. Moving averages are often used to smooth out fluctuations and highlight long-term trends.
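In symbols (standard definitions, added for reference), the cumulative mean after \(k\) observations and a trailing moving average with window \(w\) are

\[ \bar{x}_k = \frac{1}{k}\sum_{i=1}^{k} x_i, \qquad \mathrm{MA}_t = \frac{1}{w}\sum_{i=t-w+1}^{t} x_i \]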

Let's illustrate the difference between cumulative mean and moving average using R code:

```R
# Sample data vector
data_vector <- c(10, 20, 30, 40, 50)

# Calculate cumulative mean using a loop
cumulative_means <- numeric(length(data_vector))
for (i in 1:length(data_vector)) {
  cumulative_means[i] <- mean(data_vector[1:i])
}

# Calculate moving average using the 'rollmean' function from the 'zoo' package
library(zoo)
window_size <- 3
moving_averages <- rollmean(data_vector, k = window_size, align = "right", fill = NA)

# Print the results
print("Cumulative Mean:")
print(cumulative_means)

print("Moving Average:")
print(moving_averages)
```

Output:

```
[1] "Cumulative Mean:"
[1] 10 15 20 25 30

[1] "Moving Average:"
[1] NA NA 20 30 40
```

In the output, you can see the difference between cumulative mean and moving average:

- Cumulative mean: At each position, the value represents the average of all the elements up to that point.
- Moving average: At each position, the value represents the average of a window of data points, where the window size is specified by `window_size`. The `NA` values in the first two positions appear because there are not enough data points to compute a moving average with a window of size 3 at the beginning of the data.

In summary, cumulative mean provides the average of all data points up to a given position, while moving average computes the average of a subset of data points within a specified window. The choice between these methods depends on the specific analysis and trend detection requirements.


Meet the Authors

Zacharia Maganga’s blog features multiple contributors. Active: Zacharia Maganga (Lead Author), Linda Bahati (Co-Author) and Jefferson Mwangolo (Co-Author). No longer active: Florence Wavinya, Esther Njeri and Clemence Mwangolo (Guest Authors).
