We have implemented a Support Vector Machine (SVM) to predict whether a stock will outperform the market based on its current key statistics and historical financial data. An SVM is a supervised machine learning algorithm that can be used for classification problems. Here we have a set of historical financial data, about 9,000 records, each labelled as outperforming or underperforming the market. This historical data is used to train the SVM: each record is plotted in n-dimensional space, where n is the number of features; here n is 35, the number of key statistics. The SVM then performs the classification by finding the hyperplane that separates the two classes, that is, the hyperplane separates the stocks that outperformed from those that underperformed. Once we have trained the SVM on the historical data, we can provide it with current, unclassified data and ask it to predict whether a particular stock will outperform the market based on its current key statistics.
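The train-then-predict flow above can be sketched in a few lines of scikit-learn. This is a minimal illustration, not the site's actual code: the feature matrix and labels here are synthetic stand-ins for the 35 key statistics and the outperform/underperform labels.

```python
import numpy as np
from sklearn import svm

# Synthetic stand-in for the historical data: 100 records x 35 key
# statistics, labelled 1 (outperformed) or 0 (underperformed).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 35))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Fit the SVM: it finds the hyperplane separating the two classes.
clf = svm.SVC(kernel="linear")
clf.fit(X, y)

# Classify "current, unclassified" records (here, the first five rows).
pred = clf.predict(X[:5])
```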
To make that a little more intuitive: where the number of features (n) is 2, a line can be drawn between the classes to classify the data, and where n is 3, a plane can separate the classes. When n is greater than 3, a hyperplane separates the classes; visualising this hyperplane and those higher dimensions is not possible for us.
The table below is an extract of the historical data; the full CSV can be downloaded here: Download the CSV
|Ticker|DE Ratio|Trailing P/E|Price/Sales|Price/Book|Profit Margin|Operating Margin|Return on Assets|Return on Equity|Revenue Per Share|Market Cap|
|---|---|---|---|---|---|---|---|---|---|---|
The historical data that will be used to train the SVM consists of records of S&P 500 companies dating back to about 1998. It's fairly large, at about 9,000 rows by 35 columns; the critical part is that for each record we know whether the stock outperformed or underperformed the market. Before we get into an actual analysis and prediction based on current data, we can split the historical data into training and test sets to help fine-tune the parameters for the SVM. In the tuning section below we split off 1,000 records from the historical data to be used as a test set, with the remainder used as a training set. We then provide the SVM with the training set, it builds its model, and we ask it to make predictions on the test set. You're probably thinking: but hang on, we already know whether each stock in the historical data outperformed or underperformed. You're correct, but that label is not provided to the SVM when making a prediction; rather, it is used to validate the accuracy of the predictions, and therefore the parameters we have used.
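The split described above can be sketched with scikit-learn's `train_test_split`: hold out 1,000 of the ~9,000 records, train on the rest, and score the predictions against the held-out labels. The data below is synthetic; the real features come from the historical CSV.

```python
import numpy as np
from sklearn import svm
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the ~9,000-row, 35-column historical data.
rng = np.random.default_rng(1)
X = rng.normal(size=(9000, 35))
y = (X[:, 0] > 0).astype(int)

# Split off 1,000 records as a test set; train on the remainder.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1000, random_state=42)

clf = svm.SVC(kernel="linear", C=1.0)
clf.fit(X_train, y_train)

# The held-out labels are used only here, to validate the predictions
# (and therefore the parameters we chose).
acc = accuracy_score(y_test, clf.predict(X_test))
```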
All the processing is done on a virtual server in the cloud on a free account, so if a few people are online at once it could be slow.
You can check out the SVM documentation here: scikit-learn, or just jump in and set the parameters for the SVM below.
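Tuning the parameters interactively amounts to a parameter sweep. If you wanted to script the same search offline, scikit-learn's `GridSearchCV` is one way to do it (an assumption on our part: the page itself exposes the parameters as form inputs, and the grid values below are just examples). The data here is synthetic.

```python
import numpy as np
from sklearn import svm
from sklearn.model_selection import GridSearchCV

# Small synthetic data set with the same 35-feature shape.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 35))
y = (X[:, 0] > 0).astype(int)

# Sweep the two most commonly tuned SVC parameters with 3-fold CV.
param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}
search = GridSearchCV(svm.SVC(), param_grid, cv=3)
search.fit(X, y)

best = search.best_params_  # e.g. the C/kernel pair that scored highest
```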
If you can get a return above 35% you are doing pretty well.
Now that you have figured out the parameters that give the highest return, leave them set in the Tuning section above. These will be used to train the SVM on the entire historical data set. Then a set of current data pulled from Yahoo Finance on Mon Feb 6 03:20:25 2017 UTC will be run against the model. To keep the list of stocks somewhere handy, the Share on Facebook button will post them to your timeline.
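That final step looks roughly like this: retrain with the chosen parameters on all of the historical records, then classify the freshly pulled current data. The arrays below are placeholders for the two CSVs, and the parameter values are just examples of whatever the tuning settled on.

```python
import numpy as np
from sklearn import svm

# Placeholders for the historical CSV (labelled) and current CSV
# (unlabelled) data.
rng = np.random.default_rng(3)
X_hist = rng.normal(size=(9000, 35))
y_hist = (X_hist[:, 0] > 0).astype(int)
X_current = rng.normal(size=(500, 35))

# Train on the entire historical data set with the tuned parameters.
clf = svm.SVC(kernel="linear", C=1.0)
clf.fit(X_hist, y_hist)

# Predict which current stocks the model expects to outperform (1).
predictions = clf.predict(X_current)
```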
You can download the current data that was used here: Download the CSV