Code With Lucy | A Machine Learning Course Reflection Winter 2019/2020

Hello, my name is Lucy Mo and I'm a grade 11 student in South Surrey, BC, with a really strong passion for coding and AI. A few weeks ago I started my own course called Code with Lucy, an introductory AI course with the goal of spreading knowledge of AI and promoting youth in STEM. The course is completely not-for-profit. Since this is the first iteration of my class, I had a total of 5 students: 3 grade 7 students and 2 grade 10 students. Initially the class was planned to be 6 weeks long, however the debugging proved to be a lot more complicated, so we extended it to 10 weeks.
The lessons that I taught in this course can be grouped into 3 main categories: the deep learning part, the convolutional model part, and the hands-on building part.

In the deep learning part we learned about the basic architecture of machine learning as well as a lot of introductory concepts. For the architecture we learned about forward and backward propagation, activation functions, loss and cost functions, and the update rule. We also did a lot of class projects about how to improve your model, in the form of research and share.
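
To make those architecture pieces concrete, here is a minimal sketch in Python with NumPy (a toy illustration I'm adding here, not our actual class code; the data and sizes are made up) of forward propagation, a cost, backward propagation, and the update rule for a single logistic unit:

```python
import numpy as np

# Made-up toy data: 8 examples with 2 features, binary labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# Parameters of a single logistic unit.
W = rng.normal(size=(2, 1))
b = np.zeros((1, 1))
lr = 0.1  # learning rate used by the update rule

for step in range(100):
    # Forward propagation: linear step, then sigmoid activation.
    z = X @ W + b
    a = 1.0 / (1.0 + np.exp(-z))

    # Cost: mean binary cross-entropy loss over the examples.
    cost = -np.mean(y * np.log(a) + (1 - y) * np.log(1 - a))

    # Backward propagation: gradients of the cost w.r.t. W and b.
    dz = (a - y) / len(X)
    dW = X.T @ dz
    db = dz.sum(axis=0, keepdims=True)

    # Update rule: plain gradient descent.
    W -= lr * dW
    b -= lr * db
```
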
In these research-and-share projects, we would learn things like minibatches, regularization, optimizers, dropout, and one-hot vectors, and each student would research one component and share it with the class. We used the same format to explore how hyperparameters can improve your model, and since the course is taught in Python, we also explored a lot of libraries, such as TensorFlow, to help the students build their code.
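
As a hedged sketch of how those components can fit together in TensorFlow (again my own toy example rather than the exact class code; the data shapes and layer sizes are placeholders), a small Keras model can combine regularization, dropout, an optimizer, one-hot labels, and minibatch training:

```python
import numpy as np
import tensorflow as tf

# Placeholder data: 100 examples with 20 features and 2 classes.
x_train = np.random.rand(100, 20).astype("float32")
labels = np.random.randint(0, 2, size=100)
y_train = tf.keras.utils.to_categorical(labels, num_classes=2)  # one-hot vectors

model = tf.keras.Sequential([
    tf.keras.layers.Dense(
        16, activation="relu",
        kernel_regularizer=tf.keras.regularizers.l2(0.01)),   # regularization
    tf.keras.layers.Dropout(0.5),                              # dropout
    tf.keras.layers.Dense(2, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),   # optimizer
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# batch_size sets the minibatch size used for each gradient update.
model.fit(x_train, y_train, epochs=5, batch_size=16)
```
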
Throughout the course there were also a lot of brief math lessons. Since machine learning is based on math, it is really important to establish what these equations mean and how we use math to perform all the calculations that result in, for example, the last number for binary classification.
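
For example (my own small illustration, not one of the actual lessons), that last number for binary classification is typically a sigmoid output, which is then thresholded to pick a class:

```python
import numpy as np

def sigmoid(z):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

z = 1.3                   # example score from the last layer
probability = sigmoid(z)  # roughly 0.79
predicted_class = 1 if probability > 0.5 else 0
print(probability, predicted_class)
```
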
For the convolutional model part, we learned about 2 different layers: the convolution layer and the pooling layer. We also learned about how these layers, working together, can identify different features of an image for image classification, which was our final goal. With that in mind, we headed on to building our own model, and the end goal was to create a binary classifier for the 2 hand motions, 1 and 5, which I have already demonstrated in one of my previous videos; I'll link that right up here.
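
As a rough sketch of what such a model can look like in Keras (an assumption about the general shape rather than our exact model; the 64x64 grayscale input size is made up), convolution and pooling layers feed into a small classifier head with a sigmoid output for the binary decision:

```python
import tensorflow as tf

# Hypothetical input: 64x64 grayscale images of the "1" and "5" hand motions.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, kernel_size=3, activation="relu"),  # convolution layer
    tf.keras.layers.MaxPooling2D(pool_size=2),                     # pooling layer
    tf.keras.layers.Conv2D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # one number: probability of one of the two classes
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```
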
Along with building the model, we also wanted to deploy it on a Raspberry Pi, specifically one of these Raspberry Pi 4s right here. The students each got a Raspberry Pi and experimented with it at home before coming to the lesson, and we played around with ssh, scp, and VNC viewers. We actually did struggle a lot with connecting the Raspberry Pis to multiple Wi-Fi networks, but that was something we fixed later on. As we were very short on time and the Raspberry Pi was kind of hard to configure across different operating systems, not every student ended up deploying their model on a Raspberry Pi, which is unfortunate, but a few of them did this successfully: they were able to classify their images using one of these little devices.
My overall experience with the course was really good, because I personally enjoy teaching people and sharing my knowledge, and the group of kids that I taught were very passionate about coding, so it wasn't like I was teaching people who didn't want to be there at all, for example because they were forced by their parents. I feel like the passion towards coding is what drives them to do what they want to do, and that is something that I really value in my students.

I know that programming and STEM is an expanding field right now, and around my community at least, there are many programs teaching little kids to program: very introductory programs such as learning the language Python, learning Scratch, or even just learning basic JavaScript functions. Since there are so many of those programs, I wanted to go for something different, so I felt like I needed to target kids who already had a bit of experience but wanted to expand, maybe into a different area. That is why I chose to do this AI course; I also personally really enjoy it, so I really liked teaching this class.
Next, here are some videos about my students and their experiences, as well as their final projects.

My name is Jerry. I enjoyed the class for learning Python, and not just learning Python: dealing with the MacBook's terminal is also something I learned from Lucy's class. I want to keep working on my model to increase the number of objects it can process, improve the accuracy and frequency with which it can detect objects, and place each object in a certain category. For example, when an apple is found, the computer will recognize it and place it in the food category.
My name is Kevin Weng. I'm in grade 7, and I am part of the introductory AI course Code with Lucy. In this course we learned about neural networks and how they function. We even made one ourselves using the programming language Python. I've had experience with Python before, but I have never made something as complicated as a neural network. Our goal was to make a program that can differentiate between the hand gestures 1 and 5. Programming my own model was hard and tedious, and I ran into errors every step of the way, but Lucy was helpful and kind to me, and she helped me through all of it. Having a community to work with is really helpful, and it helped me learn a lot. My classmates made this process much more enjoyable than it would have been if I had started it on my own. I really enjoyed this class and I had a lot of fun learning about neural networks.
Alright, it basically takes the most likely one and then outputs it using the np.argmax command, but there is a bit of calculating.
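
As a tiny illustration of that step (my own example, not the student's actual code), np.argmax simply returns the index of the class with the highest score:

```python
import numpy as np

# Hypothetical output scores from the model for the two classes, "1" and "5".
scores = np.array([0.12, 0.88])

predicted_index = np.argmax(scores)  # index of the most likely class
print(predicted_index)               # prints 1, i.e. the second class
```
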
Overfitting and underfitting: overfitting is when the AI gets too focused on its training dataset, so it actually doesn't perform well in the real world. I had to put my finger over here, and that's already a bit of overfitting. We also need to provide a white background behind the hand. Underfitting is when you don't give enough images and it classifies everything into one class, which is also how you can end up with 100% accuracy. Overfitting is a bit more worrisome. Underfitting you can easily solve by putting in more images for classification, and also maybe increasing the number of layers and increasing the model capacity.
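
A rough sketch of how one might check for this in practice (my own illustration with made-up stand-in data, not the students' code) is to compare training accuracy with validation accuracy: a large gap suggests overfitting, while low accuracy on both suggests underfitting.

```python
import numpy as np
import tensorflow as tf

# Made-up stand-in data: 200 flattened 64x64 grayscale "images", 2 classes.
x = np.random.rand(200, 64 * 64).astype("float32")
y = np.random.randint(0, 2, size=200)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hold out part of the data for validation while training.
history = model.fit(x, y, epochs=5, validation_split=0.2, verbose=0)

train_acc = history.history["accuracy"][-1]
val_acc = history.history["val_accuracy"][-1]
# Much higher train_acc than val_acc hints at overfitting;
# low accuracy on both hints at underfitting (too little data or capacity).
print(train_acc, val_acc)
```
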
Pretty good, Lucy, you taught everyone a lot of stuff. You even had special descriptions and a lecture about underfitting, and those are very important concepts.
One of the difficulties that I encountered with this class was definitely debugging on different operating systems, because I personally use macOS and I've never had coding experience on Windows, and a few of my students did end up using Windows, and some even older versions of macOS, so I really had to learn how to code and how to use those terminals in general. But it was definitely an experience, and I expanded my knowledge of how to use these different operating systems. Debugging in general was also something I encountered in this course that was kind of difficult, because I had never run into many of the bugs that showed up in my students' projects, so we worked together and did a lot of debugging, a lot of revising, a lot of rebuilding, and it ended up working.
Some other future improvements would definitely include time management, because I did feel like I was running very short on time. If I were to teach this course again in the future, I wouldn't make it 6 weeks; I would either make it longer and cover all the material, or split it into shorter separate courses, so that students would get a more solid knowledge of these different aspects of machine learning. One more thing I would like to do is expand my own knowledge of machine learning by doing my own personal research projects, so that I can give my students more insight into what they are actually learning about.
I'm so impressed by everything that you did, just in a couple of weeks. How many weeks was it? It was around 8 weeks (actually 10 weeks). Again, thank you so much to Professor Han for coming over to help our students and encourage us. It really helped us, and he supported us a lot through this program. And we should all thank you for organizing this class and providing all the guidance.
In conclusion, this was a great experience and a wonderful opportunity that I'm very grateful to have had. I would like to once again thank Professor Song Han for providing me with this opportunity, and for giving me guidelines as well as advice on how to run the course. He also helped me a lot with my own knowledge of machine learning, as well as letting me visit his lab over the summer, which is the reason I am able to teach this course in the first place. So once again, thank you to Professor Han and all of my students for making this course such a great experience and making it all happen. See you guys next time.
