
Machine Learning for Stock Price Prediction

The stock market is known for being volatile, dynamic, and nonlinear. Accurate stock price prediction is extremely challenging because of multiple (macro and micro) factors, such as politics, global economic conditions, unexpected events, a company’s financial performance, and so on.

But all of this also means that there’s a lot of data to find patterns in. So, financial analysts, researchers, and data scientists keep exploring analytics techniques to detect stock market trends. This gave rise to the concept of algorithmic trading, which uses automated, pre-programmed trading strategies to execute orders.

In this article, we’ll be using both traditional quantitative finance methodology and machine learning algorithms to predict stock movements. We’ll walk through the full workflow: loading the data, building Simple and Exponential Moving Average models, training an LSTM, and comparing their performance.

Disclaimer: this project/article is not intended to provide financial, trading, or investment advice. No warranties are made regarding the accuracy of the models. Readers should conduct their own due diligence before making any investment decisions using the methods or code presented in this article.

When it comes to stocks, fundamental and technical analyses are at opposite ends of the market analysis spectrum.

For our exercise, we’ll be looking at technical analysis solely and focusing on the Simple MA and Exponential MA techniques to predict stock prices. Additionally, we’ll utilize LSTM (Long Short-Term Memory), a deep learning framework for time-series, to build a predictive model and compare its performance against our technical analysis.

As stated in the disclaimer, stock trading strategy is not within the scope of this article. I’ll use trading/investment terms only where they help you better understand the analysis; this is not financial advice.

Despite the volatility, stock prices aren’t just randomly generated numbers. So, they can be analyzed as a sequence of discrete-time data; in other words, time-series observations taken at successive points in time (usually on a daily basis). Time series forecasting (predicting future values based on historical values) applies well to stock forecasting.

Because of the sequential nature of time-series data, we need a way to aggregate this sequence of information. Of all the potential techniques, the most intuitive one is the MA, with its ability to smooth out short-term fluctuations. We’ll discuss more details in the next section.

For this demonstration exercise, we’ll use the closing prices of Apple’s stock (ticker symbol AAPL) from the past 21 years (1999-11-01 to 2021-07-09). Analysis data will be loaded from Alpha Vantage, which offers a free API for historical and real-time stock market data.

To get data from Alpha Vantage, you need a free API key; a walk-through tutorial can be found here. Don’t want to create an API key? No worries, the analysis data is available here as well. If you feel like exploring other stocks, code to download the data is accessible in this Github repo. Once you have the API key, all you need is the ticker symbol for the particular stock.
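As a rough sketch, pulling the daily AAPL series with the alpha_vantage Python package could look like the snippet below; the API key placeholder is illustrative, and the exact loading code in the repo may differ.

```python
from alpha_vantage.timeseries import TimeSeries

# Replace the placeholder with your own free Alpha Vantage API key.
ts = TimeSeries(key="YOUR_API_KEY", output_format="pandas")

# Full-length daily price history for Apple (ticker symbol AAPL).
data, meta_data = ts.get_daily(symbol="AAPL", outputsize="full")
print(data.head())
```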

For model training, we’ll use the oldest 80% of the data, and save the most recent 20% as the hold-out testing set.

With regard to model training and performance comparison, neptune.ai makes it convenient for users to track everything model-related, including hyper-parameter specification and evaluation plots.

Now, let’s create a project in Neptune for this particular exercise and name it “StockPrediction”.

Since stock price prediction is essentially a regression problem, the RMSE (Root Mean Squared Error) and MAPE (Mean Absolute Percentage Error %) will be our model evaluation metrics. Both are useful measures of forecast accuracy.

RMSE = sqrt( (1/N) * Σ (A_t − F_t)² )

MAPE = (1/N) * Σ | (A_t − F_t) / A_t | × 100%

, where N = the number of time points, A_t = the actual/true stock price, and F_t = the predicted/forecast value.

RMSE gives the differences between predicted and true values, whereas MAPE (%) measures this difference relative to the true values. For example, a MAPE value of 12% indicates that the mean difference between the predicted stock price and the actual stock price is 12%.

Next, let’s create several helper functions for the current exercise.
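As a minimal sketch (the function names are illustrative, not necessarily the article’s exact helpers), the RMSE and MAPE defined above can be implemented as:

```python
import numpy as np

def calculate_rmse(y_true, y_pred):
    """Root Mean Squared Error between actual and predicted prices."""
    y_true, y_pred = np.array(y_true), np.array(y_pred)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def calculate_mape(y_true, y_pred):
    """Mean Absolute Percentage Error (in %) between actual and predicted prices."""
    y_true, y_pred = np.array(y_true), np.array(y_pred)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
```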

MA is a popular method to smooth out random movements in the stock market. Similar to a sliding window, an MA is an average that moves along the time scale/periods; older data points get dropped as newer data points are added.

Commonly used periods are 20-day, 50-day, and 200-day MA for short-term, medium-term, and long-term investment respectively.

Two types of MA are most preferred by financial analysts: Simple MA and Exponential MA.

SMA, short for Simple Moving Average, calculates the average of a range of stock (closing) prices over a specific number of periods in that range. The formula for SMA is:

SMA = (P_1 + P_2 + … + P_N) / N

, where P_n = the stock price at time point n, and N = the number of time points.

For this exercise of building an SMA model, we’ll use the Python code below to compute the 50-day SMA. We’ll also add a 200-day SMA for good measure.
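A minimal sketch of the SMA computation, assuming the closing prices loaded earlier sit in a pandas DataFrame df with a 'Close' column (the column names are illustrative):

```python
# Rolling-window means over the closing price; older points drop out of the
# window as newer ones are added.
df['50day_SMA'] = df['Close'].rolling(window=50).mean()
df['200day_SMA'] = df['Close'].rolling(window=200).mean()
```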

In our Neptune run, we’ll see the performance metrics on the testing set: RMSE = 43.77 and MAPE = 12.53%.

In addition, the trend chart below shows the 50-day and 200-day SMA predictions compared with the true stock closing values.

It’s not surprising that the 50-day SMA is a better trend indicator than the 200-day SMA for (short-to-) medium-term movements. Both indicators, nonetheless, tend to underestimate the actual values.

Different from SMA, which assigns equal weights to all historical data points, EMA, short for Exponential Moving Average, applies higher weights to recent prices, i.e., the tail data points of the 50-day MA in our example. The magnitude of the weighting factor depends on the number of time periods. The formula to calculate EMA is:

EMA_t = P_t × k + EMA_(t−1) × (1 − k)

where P_t = the price at time point t,

EMA_(t−1) = the EMA at time point t−1,

N = the number of time points in the EMA,

and the weighting factor k = 2 / (N + 1).

One advantage of the EMA over SMA is that EMA is more responsive to price changes, which makes it useful for short-term trading. Here’s a Python implementation of EMA:
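A corresponding sketch uses pandas’ exponentially weighted mean (again assuming a 'Close' column in df); span=50 corresponds to k = 2/(N+1) with N = 50:

```python
# Exponential moving average over the closing price; adjust=False applies the
# recursive EMA formula shown above.
df['50day_EMA'] = df['Close'].ewm(span=50, adjust=False).mean()
```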

Examining the performance metrics tracked in Neptune, we have RMSE = 36.68, and MAPE = 10.71%, which is an improvement from SMA’s 43.77 and 12.53% for RMSE and MAPE, respectively. The trend chart generated from this EMA model also implies that it outperforms the SMA.

The screenshot below shows a comparison of SMA and EMA side-by-side in Neptune.

Now, let’s move on to the LSTM model. LSTM, short for Long Short-term Memory, is an extremely powerful algorithm for time series. It can capture historical trend patterns, and predict future values with high accuracy.

In a nutshell, the key to understanding an LSTM model is the Cell State (Ct), which represents a cell’s internal short-term and long-term memory.

To control and manage the cell state, an LSTM model contains three gates/layers. It’s worth mentioning that the “gates” here can be treated as filters to let information in (being remembered) or out (being forgotten).

As the name implies, the forget gate decides which information to throw away from the current cell state. Mathematically, it applies a sigmoid function to return a value between [0, 1] for each value from the previous cell state (Ct-1); here ‘1’ indicates “completely pass through” whereas ‘0’ indicates “completely filter out”.

The input gate is used to choose which new information gets added and stored in the current cell state. In this layer, a sigmoid function decides which input values to update (it), and a tanh function squashes each candidate value between [-1, 1] (C̃t). Element-wise multiplication of it and C̃t represents the new information that needs to be added to the current cell state.

The output gate controls the output flowing to the next cell. Similar to the input gate, it applies a sigmoid function to decide which parts of the cell state to expose, then multiplies the result with a tanh of the cell state, keeping only what we’ve decided to let through.
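For reference, here are the standard LSTM gate equations (the generic formulation, not code specific to this article), where σ is the sigmoid function, W and b are learned weights and biases, h_{t-1} is the previous hidden state, and x_t is the current input:

```latex
f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)         % forget gate
i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)         % input gate
\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)  % candidate cell state
C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t      % cell state update
o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)         % output gate
h_t = o_t \odot \tanh(C_t)                           % hidden state / output
```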

For a more detailed understanding of LSTM, you can check out this document.

Knowing the theory of LSTM, you must be wondering how it does at predicting real-world stock prices. We’ll find out in the next section, by building an LSTM model and comparing its performance against the two technical analysis models: SMA and EMA.

First, we need to create a Neptune experiment dedicated to LSTM, which includes the specified hyper-parameters.
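A sketch of this step, assuming the current neptune client API; the workspace name, token placeholder, and hyper-parameter values below are illustrative:

```python
import neptune

# Connect to the "StockPrediction" project created earlier.
run = neptune.init_run(
    project="<your-workspace>/StockPrediction",
    api_token="<YOUR_API_TOKEN>",
    name="LSTM",
)

window_size = 50      # look-back window of past closing prices
lstm_units = 50       # units per LSTM layer
cur_epochs = 15
cur_batch_size = 20

# Log the hyper-parameters to the run.
run["LSTM_args"] = {
    "units": lstm_units,
    "optimizer": "adam",
    "batch_size": cur_batch_size,
    "epochs": cur_epochs,
    "window_size": window_size,
}
```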

Next, we scale the input data for the LSTM model and split it into training and test sets.
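A sketch of the scaling and windowing, assuming df['Close'] holds the daily closing prices and reusing the window_size defined with the hyper-parameters above:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Scale closing prices to [0, 1].
scaler = MinMaxScaler(feature_range=(0, 1))
scaled_close = scaler.fit_transform(df[['Close']].values)

# Oldest 80% for training, most recent 20% held out for testing.
train_size = int(len(scaled_close) * 0.8)

def make_windows(series, window_size):
    """Turn a scaled price series into (samples, window_size, 1) LSTM inputs."""
    X, y = [], []
    for i in range(window_size, len(series)):
        X.append(series[i - window_size:i, 0])
        y.append(series[i, 0])
    X, y = np.array(X), np.array(y)
    return X.reshape((X.shape[0], X.shape[1], 1)), y

X_train, y_train = make_windows(scaled_close[:train_size], window_size)
X_test, y_test = make_windows(scaled_close[train_size - window_size:], window_size)
```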

Moving on, let’s kick off the LSTM modeling process. Specifically, we’re building an LSTM with two hidden layers and a ‘linear’ activation function on the output layer. We also use Neptune’s Keras integration to monitor and log model training progress live.
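A sketch of that setup, assuming the neptune-tensorflow-keras integration package is installed; layer sizes and training settings reuse the hyper-parameters logged earlier:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from neptune.integrations.tensorflow_keras import NeptuneCallback

# Two LSTM hidden layers followed by a linear output neuron.
model = Sequential([
    LSTM(lstm_units, return_sequences=True, input_shape=(window_size, 1)),
    LSTM(lstm_units),
    Dense(1, activation="linear"),
])
model.compile(loss="mean_squared_error", optimizer="adam")

# Neptune's Keras integration streams training metrics to the run created above.
neptune_callback = NeptuneCallback(run=run)

model.fit(
    X_train,
    y_train,
    epochs=cur_epochs,
    batch_size=cur_batch_size,
    validation_split=0.1,
    callbacks=[neptune_callback],
    verbose=1,
)
```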

Read more about the integration in the Neptune docs.

Training progress is visible live on Neptune.

Once the training completes, we’ll test the model against our hold-out set.
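A minimal sketch of that step, continuing with the variables defined above:

```python
# Predict on the hold-out windows and map the scaled outputs back to dollar prices.
predicted_scaled = model.predict(X_test)
predicted_price = scaler.inverse_transform(predicted_scaled)
actual_price = scaler.inverse_transform(y_test.reshape(-1, 1))
```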

Time to calculate the performance metrics and log them to Neptune.
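Using the hypothetical helper functions defined earlier, this could look like:

```python
# RMSE and MAPE on the hold-out set, logged to the Neptune run.
rmse = calculate_rmse(actual_price, predicted_price)
mape = calculate_mape(actual_price, predicted_price)

run["RMSE"] = rmse
run["MAPE (%)"] = mape

run.stop()  # close the run when done
```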

In Neptune, it’s amazing to see that our LSTM model achieved an RMSE = 12.58 and MAPE = 2%; a tremendous improvement over the SMA and EMA models! The trend chart shows a near-perfect overlay of the predicted and actual closing prices for our testing set.

We’ve seen the advantage of LSTMs over traditional MA models in the example of predicting Apple stock prices. Be careful about generalizing to other stocks, because, unlike many stationary time series, stock market data shows little to no seasonality and is far more chaotic.

In our example, Apple, one of the biggest tech giants, has established a mature business model and management, and its sales figures also benefit from the release of innovative products and services. Both contribute to the lower implied volatility of Apple stock, making predictions relatively easier for the LSTM model than for high-volatility stocks.

To account for the chaotic dynamics of the stock market, Echo State Networks (ESNs) have been proposed. A relatively recent member of the RNN (Recurrent Neural Network) family, an ESN uses a hidden layer of sparsely and loosely interconnected neurons; this hidden layer is referred to as the ‘reservoir’ and is designed to capture the non-linear history of the input data.

At a high level, an ESN takes in a time-series input vector and maps it to a high-dimensional feature space, i.e. the dynamical reservoir (neurons aren’t connected like a net but rather like a reservoir). Then, at the output layer, a linear activation function is applied to calculate the final predictions.

If you’re interested in learning more about this methodology, check out the original paper by Jaeger and Haas.

In addition, it would be interesting to incorporate sentiment analysis on news and social media regarding the stock market in general, as well as a given stock of interest. Another promising approach for better stock price predictions is the hybrid model, where we add MA predictions as input vectors to the LSTM model. You might want to explore different methodologies, too.

Hope you enjoyed reading this article as much as I enjoyed writing it! The whole Neptune project is available here for your reference.
