
TensorFlow Northwest User Group

926 NW 13th Ave #200
Portland, OR 97209, US



Inference, Deep Neural Nets, Hyper-Parameters, Sign Language and Visual Insights

Welcome to TensorFlow Northwest. It was exciting to have so many of you come out during February’s snow-mageddon. This month, we prepared a balanced agenda that introduces new content while incrementally building on what you’ve learned. At the same time, we designed the labs so that those who couldn’t beat the storm can catch up quickly.

Al Kari will begin with an introduction and a technology update, then move on to a key TensorFlow topic: saving your models so you can use them to classify new data. The discussion will cover best practices and techniques for using your trained models for inference and transfer learning.
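As a taste of the topic, here is a minimal sketch of saving a Keras model and restoring it for inference. The toy model, shapes, and file name are illustrative placeholders, not the material from the talk:

```python
# Sketch: persist a trained model, reload it, and run inference on new data.
import numpy as np
import tensorflow as tf

# A toy model standing in for a trained classifier.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

model.save("demo_model.keras")                        # architecture + weights
restored = tf.keras.models.load_model("demo_model.keras")

x_new = np.random.rand(2, 4).astype("float32")        # "new data" to classify
preds = restored.predict(x_new)                       # inference on the restored model
```

The restored model produces the same predictions as the original, which is what makes the saved artifact usable for deployment or as a starting point for transfer learning.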

Al is Manceps’ Co-founder and CEO (www.manceps.com) where he helps enterprise clients with their Digital Transformation and intelligent computing roadmaps.

Next, Andrew Ferlitsch will take you on a code-along adventure via Colaboratory: An Introduction to Constructing Neural Networks and Hyper-parameter Tuning.

You kicked the tires with five-digit hand signs last month. Now we progressively build up the layers of a neural network for a larger dataset of American Sign Language hand gestures covering all the letters of the alphabet. We will start with a fully connected neural network, proceed to a deep neural network, and then add dropout layers to tackle overfitting. We wrap up with the fundamentals of hyper-parameter tuning. The lab requires a laptop with a modern browser and will move at a modest pace, leaving plenty of opportunity to ask questions and share insights.
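The lab's progression can be sketched in a few lines of Keras. The input shape (28×28 grayscale images), layer sizes, and dropout rate below are placeholder assumptions, not the actual lab configuration:

```python
# Sketch: a deep network with dropout for letter-sign classification.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),                   # assumed image size
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),    # first hidden layer
    tf.keras.layers.Dropout(0.3),                     # zero 30% of units to curb overfitting
    tf.keras.layers.Dense(128, activation="relu"),    # deeper layer
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(26, activation="softmax"),  # one class per letter A–Z
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # a tunable hyper-parameter
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```

Layer count, dropout rate, and learning rate are exactly the kinds of knobs the hyper-parameter tuning portion of the lab explores.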

As Chief Data Scientist at Manceps, Andrew works on solving a diverse set of problems using Deep Learning and advanced analytics.

Finally, David Molina will demystify Neural Networks once and for all, visually! What does bias do in a Neural Network? Why are weights important? What happens when we select a high learning rate? How do we know how many inputs to select? How many hidden layers does a model need to predict accurately? How does a Neural Network work? These are the questions we ask when we begin studying Machine Learning and encounter our first Neural Network. David will explore visually what’s behind the code, and what happens inside a Neural Network when it runs.
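One of those questions, the effect of a high learning rate, is easy to see numerically. This tiny example (our own illustration, not from the talk) runs gradient descent on f(w) = w², whose gradient is 2w:

```python
# Sketch: the learning rate decides whether gradient descent converges or diverges.
def descend(lr, steps=20, w=1.0):
    """Minimize f(w) = w**2 by repeated gradient steps."""
    for _ in range(steps):
        w -= lr * 2 * w  # gradient of w**2 is 2*w
    return w

small = descend(lr=0.1)  # each step shrinks w by a factor of 0.8: converges toward 0
large = descend(lr=1.1)  # each step multiplies w by -1.2: |w| grows without bound
```

With a small learning rate the weight settles near the minimum; with one that is too large, every step overshoots and the weight oscillates with growing magnitude.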

David is an Industrial Engineer with five years’ corporate experience in building databases, managing and analyzing large data sets, and optimizing systems and processes. He is currently studying Machine Learning and consulting for companies in his native country of Colombia.

REMEMBER to bring your laptop

Pizza and drinks sponsored by Intel. Links to the labs will be posted shortly.

Hope you have a great time learning and networking.