The explosive growth of IT over the last few decades has brought all sorts of wonders into our lives. We don't notice them, because we manage to get used to them in the time it takes the industry to turn an idea into a mass-market product. That doesn't make the wonders any less wondrous. And there's more to come: quantity inevitably turns into quality. No imagination is enough to picture what our surroundings will look like fifty years from now. Assuming there's no war, of course.
Take data processing and analytics, for example. Old hat, you'd think, a hundred years old at least. But quantity has brought a new quality. Fast communication channels, huge computing power, an ocean of data already accumulated and still pouring in at an accelerating rate: put them together and you get online analytics and all kinds of predictors. Data Mining & Machine Learning have left the labs and gone mainstream.
Anyone can now build and train an electronic dummy of an assistant that suggests which food is worth stocking up on. Using it is even easier: just snap a photo of the product with your phone:
Bringing Deep Learning to the Grocery Store
...
when we go to the
grocery store, it can be difficult to really know exactly what we're
purchasing and where it comes from.
Inspired by this
problem, a few of us decided to build an application that provides
information on a packaged food product based on an image taken with a
smartphone. In a future blog post, we will share what we built and
how we built it. In this notebook, however, we delve deeper into the
actual implementation.
This notebook is divided into 5 main parts:
1. Data Acquisition: downloading the data and deduplicating it with the deduplication toolkit.
2. Finding Similar Foods: pre-computation which identifies similar foods within the dataset. This is useful in similar item recommendations.
3. Image Feature Extraction: finding a vector representation of images in the dataset using a Deep Learning model.
4. Building the Nearest Neighbor Model / Querying the Catalog: building a model with which you can match a new photo to one in the dataset.
5. Building a Predictive Service: turning all our hard work into a hosted service, which is later queried by our phone app!
...
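The original walkthrough relies on the authors' own toolkit, but the core trick is simple: extract a feature vector for every catalog image, index the vectors, and match a new photo by nearest-neighbor search. Here is a minimal sketch of that idea using NumPy and scikit-learn; extract_features is a hypothetical stub standing in for a real deep-learning feature extractor, and the file names are invented.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def extract_features(image_path):
    # Hypothetical stand-in for a deep-learning feature extractor.
    # In the real pipeline this would be, e.g., the activations of a
    # pretrained convolutional network, not pseudo-random numbers.
    rng = np.random.default_rng(abs(hash(image_path)) % (2**32))
    return rng.normal(size=256)

# A pretend catalog of packaged-food product images (names invented).
catalog = ["granola.jpg", "oat_milk.jpg", "dark_chocolate.jpg"]
features = np.stack([extract_features(p) for p in catalog])

# Index the catalog; cosine distance is a common choice for deep features.
index = NearestNeighbors(n_neighbors=1, metric="cosine").fit(features)

# Match a new smartphone photo against the catalog.
query = extract_features("photo_from_phone.jpg")
dist, idx = index.kneighbors(query.reshape(1, -1))
print("best match:", catalog[idx[0][0]], "distance:", dist[0][0])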
And what's under the hood? No prob: people want to know, so here's the answer:
Top 10 data mining algorithms in plain English
…
Today, I’m going
to explain in plain English the top 10 most influential data mining
algorithms
…
1. C4.5
2. k-means
3. Support vector machines
4. Apriori
5. EM
6. PageRank
7. AdaBoost
8. kNN
9. Naive Bayes
10. CART
…
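To get a feel for how little magic some of these contain, it helps to code one up. Here is k-means (number 2 on the list) in bare NumPy; a toy sketch without k-means++ initialization or other production niceties.

import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    # Alternate two steps: assign points to the nearest centroid,
    # then move each centroid to the mean of its assigned points.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centroids = np.stack([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break  # converged
        centroids = new_centroids
    return labels, centroids

# Two well-separated blobs; k-means should split them cleanly.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(10, 1, (50, 2))])
labels, centroids = kmeans(X, k=2)
print(centroids.round(1))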
And you can even find a step-by-step walkthrough of how to build your very own neural network, with blackjack and hookers, in Python:
How to implement a neural network
…
These tutorials
focus on the implementation and the mathematical background behind
the implementations. Most of the time, we will first derive the
formula and then implement it in Python.
The tutorials are
generated from IPython Notebook files, which will be linked to at the
end of each chapter so that you can adapt and run the examples
yourself. The neural networks themselves are implemented using the
Python NumPy library which offers efficient implementations of linear
algebra functions such as vector and matrix multiplications
…
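In the same derive-then-implement spirit, here is a minimal sketch of such a network: one hidden layer, hand-derived gradients, NumPy only, trained on the classic XOR problem. The layer sizes and learning rate are arbitrary choices, just enough for the toy task.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: tanh hidden layer, sigmoid output.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass. For a sigmoid output with cross-entropy loss the
    # error at the output simplifies to (p - y); the rest is the chain rule.
    dp = (p - y) / len(X)
    dW2 = h.T @ dp; db2 = dp.sum(axis=0)
    dh = dp @ W2.T * (1 - h**2)   # tanh'(a) = 1 - tanh(a)^2
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)
    # Plain gradient descent.
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 1.0 * grad

print(p.round(2).ravel())  # converges toward [0, 1, 1, 0]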
Or dig deeper into the peculiar properties of recurrent neural networks:
The Unreasonable Effectiveness of Recurrent Neural Networks
…
There's something
magical about Recurrent Neural Networks (RNNs). I still remember when
I trained my first recurrent network for Image Captioning. Within a
few dozen minutes of training my first baby model (with rather
arbitrarily-chosen hyperparameters) started to generate very nice
looking descriptions of images that were on the edge of making sense.
Sometimes the ratio of how simple your model is to the quality of the
results you get out of it blows past your expectations, and this was
one of those times. What made this result so shocking at the time was
that the common wisdom was that RNNs were supposed to be difficult to
train (with more experience I've in fact reached the opposite
conclusion). Fast forward about a year: I'm training RNNs all the
time and I've witnessed their power and robustness many times, and
yet their magical outputs still find ways of amusing me. This post is
about sharing some of that magic with you.
We'll train RNNs
to generate text character by character and ponder the question "how
is that even possible?"
...
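What does "character by character" look like in code? Below is a minimal sketch of the recurrence at the heart of it, loosely following the shape of Karpathy's min-char-rnn. The weights here are random and untrained, so the sample is gibberish; the point is the loop itself, which stays the same once training (backpropagation through time) has fitted the weights.

import numpy as np

# One RNN step: h_t = tanh(Wxh x_t + Whh h_{t-1} + bh);
# the output distribution over characters is softmax(Why h_t + by).
chars = list("helo ")
vocab, hidden = len(chars), 32
rng = np.random.default_rng(1)
Wxh = rng.normal(0, 0.1, (hidden, vocab))
Whh = rng.normal(0, 0.1, (hidden, hidden))
Why = rng.normal(0, 0.1, (vocab, hidden))
bh, by = np.zeros(hidden), np.zeros(vocab)

def sample(seed_ix, n):
    # Generate n characters, feeding each sampled char back as input.
    x = np.zeros(vocab); x[seed_ix] = 1
    h = np.zeros(hidden)
    out = []
    for _ in range(n):
        h = np.tanh(Wxh @ x + Whh @ h + bh)        # recurrence
        p = np.exp(Why @ h + by); p = p / p.sum()  # softmax over chars
        ix = rng.choice(vocab, p=p)
        out.append(chars[ix])
        x = np.zeros(vocab); x[ix] = 1
    return "".join(out)

print(sample(0, 40))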
Plus all sorts of other sources of inspiration:
Materials for Learning Machine Learning
original post http://vasnake.blogspot.com/2015/06/ml.html