Using Machine Learning to Determine Sky Conditions Above Wheaton
Protecting Wheaton’s telescope requires that it be covered by its hood when not in use and during inclement weather. While the hood could be opened and shut manually, autonomous operation of the telescope requires a computer algorithm that can judge the clarity of the sky from outside conditions. Creating such an algorithm is the primary focus of my work with Professor Dipankar Maitra in the Physics department. To reach this goal, I employed machine learning to make accurate assessments of sky clarity based on photos of the sky.
A machine learning model can be thought of as a system of weights assigned to certain characteristics of a piece of input data. In the training process, the model is given a set of inputs along with the correct classification for each one. The model goes through each input guessing its classification; this prediction is compared to the correct answer, and the weights are adjusted accordingly. These adjustments are made using a method called gradient descent, in which the inaccuracy of the prediction, referred to as loss in machine learning, is minimized by moving each weight in the direction of decreased loss. Loss and gradient descent are two critical aspects of machine learning and can be calculated in a number of ways depending on the desired effect. After the model has cycled through the data set enough times (each full pass is known as an epoch), the weights will be optimized for accurate classification of the data, and the model can be used to make predictions.
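To make this concrete, the toy example below (written in Python with NumPy, and not part of the actual sky classifier) shows gradient descent adjusting a single weight to minimize a mean-squared-error loss. The data, learning rate, and number of steps are invented purely for illustration.

```python
import numpy as np

# Hypothetical data: inputs x and targets y related by y = 3x.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 3.0 * x

w = 0.0              # the single weight, initialized arbitrarily
learning_rate = 0.05

for epoch in range(100):
    predictions = w * x
    loss = np.mean((predictions - y) ** 2)          # mean squared error (the "loss")
    gradient = np.mean(2 * (predictions - y) * x)   # d(loss)/d(w)
    w -= learning_rate * gradient                   # step the weight "downhill" in loss

print(f"learned weight: {w:.3f}, final loss: {loss:.6f}")
```

After enough steps the weight converges toward 3, the value that minimizes the loss; a real model does the same thing simultaneously for millions of weights.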
Situated next to Wheaton’s telescope is a camera that takes images of the sky at three-minute intervals, and these photos were used as the input for my model. To create a model for this data, I used TensorFlow, an open-source machine learning platform developed by Google. TensorFlow allows for the easy implementation of a technique called transfer learning, in which a pre-trained model is used as a starting point in the training process, reducing training time and increasing accuracy. From TensorFlow Hub, an online resource containing numerous pre-trained models, I pulled a model already trained on images to serve as the starting point for my own. Because the pre-trained model already knows how to extract useful feature vectors from images, this expedites the training process. The model was then trained using a variant of gradient descent known as Adam (short for adaptive moment estimation), which is frequently employed in similar problems because it is fast and effective even with noisy data. After 10 epochs of training on roughly 1,500 images, my model had achieved greater than 95% accuracy in classifying sky conditions as either clear or not clear.
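For readers curious what this looks like in code, the sketch below shows one plausible way to assemble such a transfer-learning classifier with TensorFlow and TensorFlow Hub. The specific Hub model, image size, directory layout, and batch size are illustrative assumptions, not the exact configuration used in this project.

```python
import tensorflow as tf
import tensorflow_hub as hub

IMAGE_SIZE = (224, 224)

# Load labeled sky images from a hypothetical directory with one
# subfolder per class (e.g. clear/ and not_clear/).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "sky_images/", image_size=IMAGE_SIZE, batch_size=32)

# Pre-trained feature extractor from TensorFlow Hub; its weights are frozen,
# so only the small classification head on top is trained.
feature_extractor = hub.KerasLayer(
    "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4",
    trainable=False)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMAGE_SIZE + (3,)),
    feature_extractor,
    tf.keras.layers.Dense(2, activation="softmax"),  # clear vs. not clear
])

# Adam (adaptive moment estimation) performs the gradient-descent updates.
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(train_ds, epochs=10)
```

The key design choice is freezing the pre-trained feature extractor: only the final dense layer's weights are adjusted, which is why relatively few images and epochs can still produce an accurate classifier.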
Since successfully creating this model, I have been working to incorporate additional data into the training and prediction processes. Using Wheaton’s weather station, I am tying image and weather data together into a single input in hopes of further increasing accuracy (a rough sketch of one possible approach appears below). In sum, this project has been, and will continue to be, a great opportunity for me to expand my knowledge of machine learning and programming, and I am excited to see what the finished product will be.
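As a purely speculative illustration of that direction, and not the project's finished code, the sketch below shows one way image features and weather readings could be combined into a single model using the Keras functional API. The feature-vector size and the particular weather variables are hypothetical.

```python
import tensorflow as tf

# Two inputs: feature vectors produced by the pre-trained image model,
# and a handful of weather-station readings (e.g. temperature, humidity,
# wind speed, pressure).
image_features = tf.keras.Input(shape=(1280,), name="image_features")
weather = tf.keras.Input(shape=(4,), name="weather")

# Concatenate both sources and classify from the combined representation.
combined = tf.keras.layers.Concatenate()([image_features, weather])
hidden = tf.keras.layers.Dense(64, activation="relu")(combined)
output = tf.keras.layers.Dense(2, activation="softmax")(hidden)

model = tf.keras.Model(inputs=[image_features, weather], outputs=output)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```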
I would like to thank Wheaton’s Professor Dipankar Maitra, as well as James Synge at Google, for the time they have invested in helping with this project; it has been a great opportunity.
Ben Osborn
Class of 2022
Physics major, Mathematics and Political Science minor