## What is cluster analysis?

Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in one sense or another) to each other than to those in other groups (clusters). It is the main task of exploratory data mining, and a common technique for statistical data analysis, used in many fields, including machine learning, pattern recognition, image analysis, information retrieval, bioinformatics, data compression, and computer graphics. Cluster analysis itself is not one specific algorithm, but the general task to be solved. It can be achieved by various algorithms that differ significantly in their notion of what constitutes a cluster and how to efficiently find them. Popular notions of clusters include groups with small distances among the cluster members, dense areas of the data space, intervals or particular statistical distributions. Clustering can, therefore, be formulated as a multi-objective optimization problem. The appropriate clustering algorithm and parameter settings (including values such as the distance function to use, a density threshold or the number of expected clusters) depend on the individual data set and intended use of the results. Cluster analysis as such is not an automatic task, but an iterative process of knowledge discovery or interactive multi-objective optimization that involves trial and failure. It is often necessary to modify data preprocessing and model parameters until the result achieves the desired properties.

## What is classification?

Classification, in machine learning and statistics, is the problem of identifying to which of a set of categories (sub-populations) a new observation belongs, on the basis of a training set of data containing observations (or instances) whose category membership is known. Classification is an example of pattern recognition. In the terminology of machine learning, classification is considered an instance of supervised learning, i.e. learning where a training set of correctly identified observations is available. The corresponding unsupervised procedure is known as clustering and involves grouping data into categories based on some measure of inherent similarity or distance. Often, the individual observations are analyzed into a set of quantifiable properties, known variously as explanatory variables or features. These properties may variously be categorical (e.g. "A", "B", "AB" or "O", for blood type), ordinal (e.g. "large", "medium" or "small"), integer-valued (e.g. the number of occurrences of a particular word in an email) or real-valued (e.g. a measurement of blood pressure). Other classifiers work by comparing observations to previous observations by means of a similarity or distance function. An algorithm that implements classification, especially in a concrete implementation, is known as a classifier.

## Numerai deep learning

I used Python to implement the convolutional neural network. The code was taken from [6] and modified to work for this dataset. The network was built using the Theano library in Python [1]. This implementation simplifies the model: it does not implement location-specific gain and bias parameters, and it implements pooling by maximum rather than by average. The LeNet5 model uses logistic regression for image classification. The convolutional neural network was trained by passing it the compressed train, test and validation datasets. There is one bias per output feature map. The feature maps are convolved with filters, and each feature map is individually downsampled using max-pooling. The compressed dataset is divided into small batches so as to reduce the overhead of computing and copying data for each individual image. The batch size for this model is set to 500. The learning rate, which is the step-size factor for stochastic gradient descent, is kept at 0.1. The maximum number of epochs for running the optimizer is set to 200, meaning that learning for each label runs for up to 200 epochs to optimize the network. When the first convolutional pooling layer is constructed, filtering reduces the image size to 24×24, which max-pooling further reduces to 12×12. In the second convolutional pooling layer, filtering reduces the image size to 8×8 and max-pooling reduces it further to 4×4. Since the hidden layer is fully connected, it operates on 2D matrices of rasterized images, producing a matrix of shape (500, 800) with default values. The outputs of the fully connected hidden layer are classified using logistic regression. The cost minimized during training is the negative log-likelihood of the model. A Theano function [2], the test model, is constructed to compute the classification errors made by the model.
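
The layer sizes quoted above follow from simple arithmetic: a valid (unpadded) convolution with 5×5 filters shrinks each side by 4, and non-overlapping 2×2 max-pooling halves it. A small helper, assuming a 28×28 input as in the LeNet tutorial, checks the numbers:

```python
def conv_pool_size(size, filter_size=5, pool=2):
    """Output side length after a valid convolution, then after max-pooling."""
    conv = size - filter_size + 1   # valid convolution: no padding
    return conv, conv // pool       # non-overlapping pooling divides each side

# First conv-pool layer: 28x28 input -> 24x24 -> 12x12
conv1, pool1 = conv_pool_size(28)
# Second conv-pool layer operates on the 12x12 maps -> 8x8 -> 4x4
conv2, pool2 = conv_pool_size(12)
print(conv1, pool1, conv2, pool2)  # 24 12 8 4
```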
We create two lists: one of all model parameters that have to be fit by gradient descent, and one of the gradients of those parameters. The model parameters are updated by stochastic gradient descent (SGD) in the train model, which is also a Theano function. Manually writing an update rule for each parameter would be tedious because of the many parameters in this model, so the updates list is created by looping over all parameter–gradient pairs automatically. We keep an improvement threshold, meaning that a relative improvement of at least this value is considered significant. Once training of the convolutional neural network is done, the train model is returned.
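
The update loop described above can be sketched in plain NumPy (this is an illustration of the idea, not the actual Theano code from [6], which builds symbolic update pairs instead):

```python
import numpy as np

def sgd_updates(params, grads, learning_rate=0.1):
    """Build the SGD update for every (parameter, gradient) pair in one loop,
    instead of writing one update rule per parameter by hand."""
    return [p - learning_rate * g for p, g in zip(params, grads)]

# toy example: two "parameters" and their gradients
params = [np.array([1.0, 2.0]), np.array([0.5])]
grads = [np.array([0.2, -0.4]), np.array([1.0])]
new_params = sgd_updates(params, grads)  # [[0.98, 2.04], [0.4]]
```

In Theano the same loop yields `(param, param - lr * grad)` pairs that are handed to `theano.function(..., updates=updates)`; the automatic loop is what keeps the code short when the model has many parameter tensors.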

Deep learning refers to a class of machine learning techniques in which many layers of information processing stages in hierarchical architectures are exploited for pattern classification and for feature or representation learning [10]. It lies at the intersection of several research areas, including neural networks, graphical modeling, optimization, pattern recognition, and signal processing [5]. Yann LeCun adopted the deep supervised backpropagation convolutional network for digit recognition. In the recent past, deep learning has become a valuable research topic in both computer vision and machine learning, achieving state-of-the-art results on a variety of tasks. The deep convolutional neural network (CNN) proposed by Hinton's group came first in the ImageNet image classification task. The model was trained on more than one million images and achieved a winning top-5 test error rate of 15.3% over 1,000 classes. After that, some recent works obtained better results by improving CNN models; the top-5 test error rate decreased to 13.24% by training the model to simultaneously classify, locate and detect objects. Besides image classification, the object detection task can also benefit from the CNN model. Generally speaking, three important reasons for the popularity of deep learning today are drastically increased chip processing abilities (e.g., GPU units), the significantly lower cost of computing hardware, and recent advances in machine learning and signal/information processing research. Over the past several years, a rich family of deep learning techniques has been proposed and extensively studied, e.g., Deep Belief Networks (DBN), Boltzmann Machines (BM), Restricted Boltzmann Machines (RBM), Deep Boltzmann Machines (DBM), Deep Neural Networks (DNN), etc.
Among these techniques, deep convolutional neural networks, a discriminative deep architecture belonging to the DNN category, have achieved state-of-the-art performance on various tasks and competitions in computer vision and image recognition. Specifically, the CNN model consists of several convolutional layers and pooling layers stacked one on top of another. The convolutional layer shares many weights, and the pooling layer sub-samples the output of the convolutional layer, reducing the data rate from the layer below. The weight sharing in the convolutional layer, together with appropriately chosen pooling schemes, endows the CNN with some invariance properties (e.g., translation invariance). My work is similar to that of Ji Wan et al. [10] but differs in that the dataset I am using is different from the ones they used in their study. My approach to image matching is also novel and has not been used in any study similar to mine.

Numerai deep learning example

Research related to CBIR in the past decade has touched many aspects of the problem, and deep learning was seen to give the best results. The problem of annotated images was also touched upon, but it was not combined with the deep learning method. In my thesis I propose to show better results for annotated images by using not only the images but also the annotations provided with each image. I will be using a convolutional neural network for this purpose.


I have previously worked on CBIR using the Bag-of-Words model with the same dataset. The results of the present study are evaluated against the results of that previous study and are discussed in chapter 5. The work is also compared against the results in Ji Wan et al.'s work [10], where deep learning was used for CBIR with various datasets; it differs in that the datasets they used were plain images without any annotations.


"Convolutional neural network (CNN) is a type of feed-forward artificial neural network where the individual neurons are tiled in such a way that they respond to overlapping regions in the visual field" [11]. CNNs are biologically inspired variants of multilayer perceptrons (MLPs), designed to require minimal preprocessing. These models are widely used in image and video recognition. When CNNs are used for image recognition, they look at small portions of the input image, called receptive fields, through multiple layers of small neuron collections [11]. The results from these collections are tiled so that they overlap, yielding a better representation of the original image; every such layer repeats this process. This is why CNNs are able to tolerate translation of the input image. The outputs of neuron clusters may be combined by local or global pooling layers included in the convolutional network. Inspired by biological processes, convolutional networks also contain various combinations of fully connected layers and convolutional layers, with a point-wise nonlinearity applied at the end of or after each layer [11]. The convolution operation is used on small regions to avoid the situation where, if all layers were fully connected, billions of parameters would exist. Convolutional networks use shared weights in the convolutional layers, i.e. the same filter (weight bank) is used for each pixel in the layer, which is advantageous because it reduces the required memory size and improves performance. CNNs require relatively little preprocessing compared to other image classification algorithms.


CNNs enforce a local connectivity pattern between neurons of adjacent layers to exploit spatially local correlation [6]. Fig. 4.1 illustrates that in layer m the inputs of hidden units come from a subset of units in layer m-1, namely units with spatially adjoining receptive fields.


Every filter h_i in a CNN is replicated across the entire visual field. The replicated filters share the same parameters, i.e. weights and bias, and form a feature map. Fig. 4.2 shows three hidden units belonging to the same feature map; weights of the same color are shared, that is, constrained to be identical [6]. Gradient descent can still be used to learn such shared parameters with only a small change to the original algorithm: the gradient of a shared weight is simply the sum of the gradients of the parameters being shared. Replicating units in this way allows features to be detected regardless of their position in the visual field, and weight sharing greatly reduces the number of free parameters being learnt, increasing learning efficiency. These constraints are what allow CNNs to achieve better generalization on vision problems.


We obtain a feature map by repeatedly applying a function across sub-regions of the entire image: the input image is convolved with a linear filter, a bias term is added, and a non-linear function is then applied [6]. Denoting the k-th feature map at a given layer as h^k, whose filters are determined by the weights W^k and bias b_k, the feature map is obtained (for a tanh non-linearity) as:

    h^k_{ij} = tanh( (W^k ∗ x)_{ij} + b_k )
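
The equation above translates directly into code. A NumPy sketch for a single feature map follows; like most implementations, it computes cross-correlation rather than flipping the filter, which only changes the orientation of the learned weights:

```python
import numpy as np

def feature_map(x, W, b):
    """h_ij = tanh((W * x)_ij + b): slide the linear filter W over the input x
    (valid mode, no padding), add one bias per feature map, apply tanh."""
    fh, fw = W.shape
    out_h = x.shape[0] - fh + 1
    out_w = x.shape[1] - fw + 1
    h = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # cross-correlation of the filter with the local receptive field
            h[i, j] = np.sum(x[i:i + fh, j:j + fw] * W) + b
    return np.tanh(h)

x = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 "image"
W = np.ones((2, 2)) * 0.01                    # toy 2x2 filter
h = feature_map(x, W, b=0.0)                  # 3x3 feature map
```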


Fig. 4.3 depicts two layers of a CNN. There are 4 feature maps in layer m-1 and 2 feature maps in hidden layer m (h^0 and h^1). The pixels in the feature maps h^0 and h^1 (blue and red squares) are computed from the pixels of layer m-1 that lie within their 2×2 receptive field in the layer below (colored squares). Observe how the receptive field spans all 4 input feature maps; as a result, the weights W^0 and W^1 of h^0 and h^1 are 3D weight tensors. The leading dimension indexes the input feature maps, while the other two refer to the pixel coordinates. Putting it all together, W^{kl}_{ij} denotes the weight connecting each pixel of the k-th feature map at layer m with the pixel at coordinates (i, j) of the l-th feature map at layer m-1 [6].


Max-pooling, a form of non-linear down-sampling, is an important concept in CNNs. The input image is partitioned into a set of non-overlapping rectangles, and the maximum value of each such sub-region is taken. We use max-pooling in vision for two reasons. First, it reduces the computation for upper layers by eliminating non-maximal values. Second, it provides a form of translation invariance: suppose a max-pooling layer is cascaded with a convolutional layer, and the input image is translated by a single pixel in one of 8 directions. If max-pooling is done over a 2×2 region, 3 of the 8 possible configurations produce exactly the same output at the convolutional layer; this jumps to 5 out of 8 for max-pooling over a 3×3 region [6]. Since it provides additional robustness to position, max-pooling also reduces the dimensionality of intermediate representations.
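
The partition-and-take-the-maximum operation is short enough to show directly; a NumPy sketch for non-overlapping p×p pooling:

```python
import numpy as np

def max_pool(x, p=2):
    """Partition x into non-overlapping p x p blocks and keep the maximum of
    each block, reducing each spatial dimension by a factor of p."""
    h, w = x.shape
    trimmed = x[:h - h % p, :w - w % p]          # drop edges that don't fit
    return trimmed.reshape(h // p, p, w // p, p).max(axis=(1, 3))

x = np.array([[1, 2, 0, 1],
              [3, 4, 1, 0],
              [0, 1, 5, 6],
              [2, 1, 7, 8]], dtype=float)
pooled = max_pool(x)   # 2x2 output: [[4, 1], [2, 8]]
```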


The dataset I chose for this thesis is from the SUN database [12]. The major reason for choosing this dataset was that its images are pre-annotated, with the annotations stored as an XML file for each image. The SUN database is huge, so I chose a small subset of it for this study. I classify images into eight classes, namely water, car, mountain, ground, tree, building, snow and sky, plus an unknown class that contains everything else. I chose only those sets of images that I felt were most relevant to these classes, collecting a database of 3000 images from 41 categories. I randomly divided the dataset into an 80% training set and 20% testing set, and the training set was further divided into 80% training and 20% validation; this yields 1900 training images, 600 testing images and 500 validation images. The major drawback of this dataset is that the images are annotated by humans and the annotations are not perfect, which may have some effect on the results. I try to handle this problem by gathering as many synonyms as I can for each class label; for example lake, lake water, sea water, river water, wave, ripple, river and sea all belong to the class label water. I mapped these synonyms to their respective class labels. Not all images in every category were annotated, so I filtered out the annotated images and used only them for this study. Fig. 4.5 shows an example image from the dataset and its annotation file, where it can be seen how a river is annotated by the user.
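
The synonym-to-class mapping described above can be sketched as a simple lookup; the `SYNONYMS` table and `to_class_label` helper below are hypothetical names, with the synonym entries taken from the examples in the text:

```python
# Map annotation synonyms onto the class labels used in the study.
# The synonym list here is illustrative, from the examples given above.
SYNONYMS = {
    "lake": "water", "lake water": "water", "sea water": "water",
    "river water": "water", "wave": "water", "ripple": "water",
    "river": "water", "sea": "water",
}

CLASSES = {"water", "car", "mountain", "ground", "tree",
           "building", "snow", "sky"}

def to_class_label(annotation):
    """Normalize a raw human annotation to one of the study's class labels,
    falling back to 'unknown' for everything else."""
    label = annotation.strip().lower()
    label = SYNONYMS.get(label, label)
    return label if label in CLASSES else "unknown"

print(to_class_label("Sea water"))   # water
print(to_class_label("horse"))       # unknown
```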

## Data Science Courses

During the last decade, a great scientific effort has been invested in the development of methods that could provide efficient and effective detection of botnets. As a result, various detection methods based on diverse technical principles and various aspects of botnet phenomena have been defined. Due to the promise of non-invasive and resilient detection, botnet detection based on network traffic analysis has drawn special attention from the research community. Furthermore, many authors have turned their attention to the use of machine learning algorithms as a means of inferring botnet-related knowledge from the monitored traffic. This paper presents a review of contemporary botnet detection methods that use machine learning as a tool for identifying botnet-related traffic. The main goal of the paper is to provide a comprehensive overview of the field by summarizing current scientific efforts. The contribution of the paper is threefold. First, the paper provides detailed insight into the existing detection methods by investigating which bot-related heuristics were assumed by the detection systems and how different machine learning techniques were adapted in order to capture botnet-related knowledge. Second, the paper compares the existing detection methods by outlining their characteristics, performances, and limitations. Special attention is placed on the practice of experimenting with the methods and the methodologies of performance evaluation. Third, the study indicates limitations and challenges of using machine learning for identifying botnet traffic and outlines possibilities for the future development of machine learning-based botnet detection systems.


## Machine Learning

### Supervised versus unsupervised learning

Machine learning is a branch of artificial intelligence that uses algorithms, for example, to find patterns in data and make predictions about future events. In machine learning, a dataset comprises observations called instances, each described by a number of variables called attributes. Supervised learning is the modeling of datasets containing labeled instances. In supervised learning, each instance can be represented as (x, y), where x is a set of independent attributes (these can be discrete or continuous) and y is the dependent target attribute.

Table 3.1: An example of a supervised learning dataset

| Time  | x1 | x2 | x3    | x4     | x5     | x6     | x7    | y         |
|-------|----|----|-------|--------|--------|--------|-------|-----------|
| 09:30 | b  | n  | -0.06 | -116.9 | -21.7  | 28.6   | 0.209 | up        |
| 09:31 | b  | b  | 0.06  | -85.2  | -61    | -21.7  | 0.261 | unchanged |
| 09:32 | b  | b  | 0.26  | -4.4   | -114.7 | -61    | 0.17  | down      |
| 09:33 | n  | b  | 0.11  | -112.7 | -132.5 | -114.7 | 0.089 | unchanged |
| 09:34 | n  | n  | 0.08  | -128.5 | -101.3 | -132.5 | 0.328 | down      |


The target attribute y can also be either continuous or discrete; however, the category of modeling is regression if the target is continuous and classification if it is discrete (a discrete target is also called a class label). Table 3.1 demonstrates a dataset for supervised learning with seven independent attributes x1, x2, ..., x7 and one dependent target attribute y. More specifically, x1, x2 ∈ {b, n}, x3, ..., x7 ∈ R, and the target attribute y ∈ {up, unchanged, down}. The attribute time is used to identify an instance and is not used in the model. The training and test datasets are represented in the same way, except that where the training set contains vectors with known label (y) values, the labels of the test set are unknown. In unsupervised learning the dataset does not include a target attribute or a known outcome. Since the class values are not determined a priori, the purpose of this learning technique is to find similarity among groups, or some intrinsic clusters within the data. A very simple two-dimensional (two-attribute) demonstration is shown in Figure 3.1, with the data partitioned into five clusters.
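
As an illustrative sketch (not part of the thesis code), a plain k-means clusterer applied to a hypothetical two-attribute dataset with two obvious groups:

```python
import numpy as np

def kmeans(X, centroids, iters=20):
    """Plain k-means: repeatedly assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    centroids = centroids.astype(float).copy()
    for _ in range(iters):
        # distance from every point to every centroid
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(len(centroids)):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# two well-separated clusters in two dimensions (two attributes)
X = np.array([[0, 0], [0.1, 0.2], [0.2, 0.1],
              [5, 5], [5.1, 5.2], [4.9, 5.0]])
labels, centers = kmeans(X, centroids=X[[0, 3]])  # labels: [0 0 0 1 1 1]
```

As the text notes, the number of clusters is itself a choice: rerunning with different initial centroids (or a different k) partitions the same data differently.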


A case could be made, however, that the data should be partitioned into two clusters, or three, etc.; the "correct" answer depends on prior knowledge or biases associated with the dataset and on the level of similarity required for the underlying problem. Theoretically we can have as many clusters as data instances, although that would defeat the purpose of clustering. Depending on the problem and the data available, the algorithm required can be either a supervised or an unsupervised technique. In this thesis, the goal is to predict the future price direction of the streaming stock dataset. Since the future direction becomes known after each instance, the training set is constantly expanding with labeled data as time passes. This requires a supervised learning technique. Additionally, we explore the use of different algorithms, since some may be better suited depending on the underlying data. Care should be taken to avoid "when all you have is a hammer, everything becomes a nail."

### 3.3 Supervised learning algorithms

#### 3.3.1 k nearest neighbor

The k nearest neighbor (kNN) algorithm is one of the simplest machine learning methods and is often referred to as a lazy learner, because learning is not performed until actual classification or prediction is required. It takes the most frequent class, as measured by the weighted Euclidean distance (or some other distance measure), among the k closest training examples in the feature space. In specific problems such as text classification, kNN has been shown to work as well as more complicated models [240]. When nominal attributes are present, it is generally advised to arrive at a "distance" between the different values of the attributes [236]. For our dataset, this could apply to the different trading days: Monday, Tuesday, Wednesday, Thursday, and Friday.
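
A minimal kNN classifier along these lines, with hypothetical toy data (majority vote over plain Euclidean distance; a weighted vote would scale each vote by inverse distance):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among the k training examples that are
    closest to it in Euclidean distance."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# toy two-attribute dataset with labels 'up' and 'down'
X_train = np.array([[0.0, 0.1], [0.1, 0.0], [0.2, 0.2],
                    [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
y_train = ["up", "up", "up", "down", "down", "down"]
print(knn_predict(X_train, y_train, np.array([0.05, 0.05])))  # up
```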


More specifically, precision is the number of correctly identified positive examples divided by the total number of examples classified as positive (Equation 3.3). Recall is the percentage of positives correctly identified out of all the existing positives (Equation 3.4); it is the number of correctly classified positive examples divided by the total number of true positive examples in the test set. In our imbalanced example above, with 99% small moves and 1% large moves, precision would be how often a large move was correctly identified as such, while recall would be the number of large moves correctly identified out of all the large moves in the dataset.

    Precision = TP / (TP + FP)                                  (3.3)
    Sensitivity (Recall) = TP / (TP + FN)                       (3.4)
    Specificity = TN / (TN + FP)                                (3.5)
    F-measure = 2 · precision · recall / (precision + recall)   (3.6)

Precision and recall are often achieved at the expense of each other: high precision is achieved at the expense of recall, and high recall at the expense of precision. An ideal model would have both high recall and high precision. The F-measure (in the literature also called the F-score or F1-score), shown in Equation 3.6, combines precision and recall into a single measurement as their harmonic mean. The F-measure ranges from 0 to 1, with 1 denoting a classifier perfectly capturing precision and recall.

#### 3.4.3 Kappa

The second approach to comparing imbalanced datasets is based on Cohen's kappa statistic. This metric takes the randomness of the class into consideration and provides an intuitive result. From [14], the metric can be observed in Equation 3.7, where P0 is the total agreement probability and Pc is the agreement probability due to chance.

    κ = (P0 − Pc) / (1 − Pc)                  (3.7)
    P0 = Σ_{i=1}^{I} P(x_ii)                  (3.8)
    Pc = Σ_{i=1}^{I} P(x_i.) P(x.i)           (3.9)

The total agreement probability P0 (i.e. the classifier's accuracy) can be computed according to Equation 3.8, where I is the number of class values, P(x_i.) is the row marginal probability, and P(x.i) is the column marginal probability, both obtained from the confusion matrix. The probability due to chance, Pc, can be computed according to Equation 3.9. The kappa statistic is constrained to the interval [−1, 1], with κ = 0 meaning that agreement equals random chance, and κ = 1 and κ = −1 meaning perfect agreement and perfect disagreement respectively.

Table 3.3: Computing the kappa statistic from the confusion matrix

(a) Confusion matrix – counts

|             | Predicted up | Predicted down | Predicted flat | Total |
|-------------|--------------|----------------|----------------|-------|
| Actual up   | 139          | 80             | 89             | 308   |
| Actual down | 10           | 298            | 13             | 323   |
| Actual flat | 40           | 16             | 313            | 369   |
| Total       | 189          | 396            | 415            | 1000  |

(b) Confusion matrix – probabilities

|             | Predicted up | Predicted down | Predicted flat | Total |
|-------------|--------------|----------------|----------------|-------|
| Actual up   | 0.14         | 0.08           | 0.09           | 0.31  |
| Actual down | 0.01         | 0.30           | 0.01           | 0.32  |
| Actual flat | 0.04         | 0.02           | 0.31           | 0.37  |
| Total       | 0.19         | 0.40           | 0.42           | 1.00  |

For example, Table 3.3a shows the results of a three-class problem, with the marginal probabilities calculated in Table 3.3b. The total agreement probability, also known as accuracy, is computed as P0 = 0.14 + 0.30 + 0.31 = 0.75, while the probability by chance is Pc = (0.19 × 0.31) + (0.40 × 0.32) + (0.42 × 0.37) = 0.34. The kappa statistic is therefore κ = (0.75 − 0.34)/(1 − 0.34) = 0.62.

#### 3.4.4 ROC

The third approach to comparing classifiers is the Receiver Operating Characteristic (ROC) curve. This is a plot of the true positive rate, also called recall or sensitivity (Equation 3.10), against the false positive rate, also known as 1 − specificity (Equation 3.11).

    TPR = TP / (TP + FN)    (3.10)
    FPR = FP / (TN + FP)    (3.11)

The best performance is indicated by a curve close to the top left corner (i.e. a small false positive rate and a large true positive rate), while a curve along the diagonal reflects a purely random classifier. As a demonstration, Figure 3.5 displays ROC curves for three classifiers.
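
The kappa computation from Table 3.3 can be reproduced numerically from the probability-form confusion matrix. Note that the column marginals computed from the cells give 0.41 for the last column rather than the table's rounded 0.42; kappa still comes out at approximately 0.62 as stated:

```python
import numpy as np

# Probability-form confusion matrix from Table 3.3b (rows = actual, cols = predicted)
P = np.array([[0.14, 0.08, 0.09],
              [0.01, 0.30, 0.01],
              [0.04, 0.02, 0.31]])

p0 = float(np.trace(P))                             # total agreement (accuracy), Eq. 3.8
pc = float((P.sum(axis=1) * P.sum(axis=0)).sum())   # chance agreement, Eq. 3.9
kappa = (p0 - pc) / (1 - pc)                        # Eq. 3.7
print(round(p0, 2), round(kappa, 2))                # 0.75 0.62
```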
Classifier 1 has a more ideal ROC curve than Classifier 2 or 3. Classifier 2 is slightly better than random, while Classifier 3 is worse; in Classifier 3's case, it would be better to choose the opposite of what the classifier predicts. For a single-number comparison, the Area Under the ROC Curve (AUC) is calculated by integrating the ROC curve. A random classifier therefore has an AUC of 0.50, and classifiers better or worse than random have an AUC greater than or less than 0.50 respectively. The AUC is most commonly used with two-class problems, although for multi-class problems it can be weighted according to the class distribution. The AUC is also equal to the Wilcoxon statistic.

#### 3.4.5 Cost-based

The cost-based method of evaluating classifiers is based on the "cost" associated with making incorrect decisions [61, 65, 102]. The performance metrics seen thus far do not take into consideration the possibility that not all classification errors are equal. For example, an opportunity cost can be associated with missing a large move in a stock, and a cost can also be assigned to initiating an incorrect trade. A model can be built with a high recall, missing no large moves in the stock, but its precision would most likely suffer. The cost-based approach assigns a cost to each decision, which can be evaluated to determine the suitability of the model. A cost matrix is used to represent the cost of each decision, with the goal of minimizing the total cost associated with the model. This can be formalized with a cost matrix C whose entry (i, j) is the cost when the actual class is i and the predicted class is j. When i = j the prediction is correct, and when i ≠ j the prediction is incorrect. An advantage of using a cost-based evaluation metric for trading models is that the cost of making incorrect decisions is known by analyzing empirical data.


For example, all trades incur a cost in the form of a trade commission, and money used in a trade is temporarily unavailable, thus incurring an opportunity cost. Additionally, a loss associated with an incorrect decision can be averaged over similar previous losses; gains can be computed similarly. Consider, for example, a trading firm attempting to predict the directional price move of a stock with the objective of trading on the decision. At time t, the stock can move up, down, or have no change in price; at time t+n, the direction is unknown (this can be observed in Figure 3.6). For time t+1, a prediction of up might result in the firm purchasing the stock. Different classification errors, however, would have different associated costs. A firm expecting a move up would purchase the stock in anticipation of the move, but a subsequent move down would be more harmful than no change in price: an actual move down would immediately result in a trading loss, whereas no change in price would result in a temporary opportunity cost, with the stock still having the potential to go in the desired direction. Additionally, an incorrect prediction of "no change" would merely result in a lost opportunity, with no actual money put at risk, since a firm would not trade in anticipation of an unchanged market. Table 3.4 represents a theoretical cost matrix for this problem, with three separate error amounts: 0.25, 0.50, and 1.25.

Table 3.4: Theoretical cost matrix

|                  | Predicted down | Predicted no change | Predicted up |
|------------------|----------------|---------------------|--------------|
| Actual down      | 0              | 0.25                | 1.25         |
| Actual no change | 0.50           | 0                   | 0.50         |
| Actual up        | 1.25           | 0.25                | 0            |

#### 3.4.6 Profitability of the model

While the end result of predicting stock price direction is to increase profitability, the performance metrics discussed thus far (with the exception of the cost-based metric) evaluate classifiers on their ability to classify correctly, not on the overall profitability of a trading system. As an example, a classifier may have very high accuracy, kappa, AUC, etc., but this may not necessarily equate to a profitable trading strategy, since the profitability of individual trades may be more important than being "right" a majority of the time; e.g. making $0.50 on each of one hundred trades is not as profitable as losing $0.05 95 times and then making $12 on each of five trades. (An argument can also be made that a less volatile approach is more ideal, i.e. making small sums consistently; this depends on the overall objective of the trader: maximizing stability or overall profitability.) Figure 3.7 represents a trading model found in much of the academic literature, where the classifier is built on the data with a prediction of up, down, or no change in the market price, and the outcome is passed to a second set of rules. These rules provide direction as to whether a prediction of "up", for example, should equate to buying stock, buying more stock, or buying back a position that was previously shorted. The rules also address the amount of stock to be purchased, how much to risk, etc. When considering the profitability of a model, the literature generally follows the form of an automated trading model, which is "buy when the model says to buy, then sell after n minutes/hours/days" [161], or "buy when the model says to buy, then sell if the position is up x%, or else sell after n minutes/hours/days" [138, 164, 202]. Teixeira et al. [214] added another rule (called a "stop loss" in trading), which prevents losses from going past a certain dollar amount during an individual trade. The goal of this thesis is not to provide an "out of the box" trading system with proven profitability, but instead to help the user make trading decisions with the help of machine learning techniques.
Additionally, there are many different rules in the trading literature relating to how much stock to buy or sell, how much money to risk in a position, how often trades should take place, and when to buy and sell; each of these questions is enough for an entire dissertation. In practice, trading systems often involve many layers of controls, such as forecasting and optimization methodologies filtered through multiple layers of risk management. This typically involves a human supervisor (risk manager) who can make decisions such as when to override the system [69]. The focus of this paper, therefore, will remain on the classifier itself: maximizing predictability when faced with different market conditions.

Table 3.5: Importance of using an unbiased estimate of generalizability. Trained using the dataset from Appendix B for January 3, 2012.

|          | January 3, 2012 (training data) | January 4, 2012 (unseen data) |
|----------|---------------------------------|-------------------------------|
| Accuracy | 94.713%                         | 37.31%                        |
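
The cost-based evaluation from Table 3.4 amounts to weighting the confusion matrix by the cost matrix and summing. A sketch, using the Table 3.4 costs and a hypothetical confusion matrix of counts:

```python
import numpy as np

# Cost matrix from Table 3.4: entry (i, j) is the cost when the actual class
# is i and the predicted class is j (class order: down, no change, up).
C = np.array([[0.00, 0.25, 1.25],
              [0.50, 0.00, 0.50],
              [1.25, 0.25, 0.00]])

def total_cost(confusion, costs):
    """Total cost of a classifier: element-wise product of the confusion
    matrix (counts) and the cost matrix, summed over all cells."""
    return float((confusion * costs).sum())

# hypothetical confusion matrix of counts, same class order
conf = np.array([[80, 15, 5],
                 [10, 85, 5],
                 [4, 6, 90]])
print(total_cost(conf, C))  # 24.0
```

The model minimizing this total (rather than maximizing raw accuracy) is preferred, since diagonal entries cost nothing and the most damaging confusions (up predicted as down, and vice versa) are penalized most heavily.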

## Machine Learning

Regularization
Multiple regression
Model validation
Precision
Recall
ROC curve
Predictive model
Overfitting
Loss function
L1/L2 Regularization
Response variable
Estimation
Multi-Collinearity
Resampling
Jackknife resampling/Jackknifing
MSE – mean squared error
Selection bias
Local Max/Min/Optimum
A/B Testing
Web Analytics
Root cause analysis
Big data
Data mining
Binary hypothesis test
Null hypothesis (H0)
Alternative Hypothesis (H1)
Statistical Power
Type I error
Type II error
Bootstrapping
Cross-Validation
Ridge regression
Lasso
K-means clustering
Semantic Indexing
Principal Component Analysis
Supervised learning
Unsupervised learning
False positives
False negatives
NLP
Feature vector
Random forest
Support Vector Machines
Collaborative Filtering
N-grams
Cosine distance
Naive Bayes
Boosted trees
Decision tree
Stepwise regression
Intercept
Impurity measures
Maximal margin classifier
Kernel
Kernel trick
Dimensionality reduction
Dimensionality curse
Newton’s method
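A few of the evaluation terms in the list above (precision, recall, MSE, false positives/negatives) can be illustrated with a small hand-computed sketch; the label vectors below are made up for the example:

```python
# Made-up binary ground truth and predictions (1 = positive class).
actual    = [1, 1, 1, 0, 0, 0, 0, 1]
predicted = [1, 1, 0, 0, 0, 1, 0, 1]

pairs = list(zip(actual, predicted))
tp = sum(a == 1 and p == 1 for a, p in pairs)  # true positives
fp = sum(a == 0 and p == 1 for a, p in pairs)  # false positives
fn = sum(a == 1 and p == 0 for a, p in pairs)  # false negatives

precision = tp / (tp + fp)  # of predicted positives, how many were correct
recall    = tp / (tp + fn)  # of actual positives, how many were found
mse = sum((a - p) ** 2 for a, p in pairs) / len(pairs)  # mean squared error

print(precision, recall, mse)  # 0.75 0.75 0.25
```

Precision and recall come straight from the confusion-matrix counts, and for 0/1 labels the MSE is simply the fraction of misclassified examples.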


## Data Science Definition. What is Data Science?

There is much debate about it, but a short definition of data science is:

“Data science is an interdisciplinary field that uses scientific methods to extract information from data in various forms.”

Data science uses methods from statistics, computer science, and mathematics to interpret data for business decisions. The amount of data available in modern society grows as technology reaches further into people’s lives. These massive sets of structured and unstructured data help reveal patterns and trends for business opportunities or academic research. One can also see an increasing number of traditional scientific fields taking on the adjective ‘computational’ or ‘quantitative’. In industry, data science is transforming everything from healthcare to media, and this trend will continue.

## Machine Learning Definition

Machine learning is a subfield of computer science that gives computers the ability to learn without being explicitly programmed. The goal of machine learning is to develop learning algorithms that learn automatically, without human intervention or assistance, simply by being exposed to new data. The machine learning paradigm can be viewed as “programming by example”. This subarea of artificial intelligence intersects broadly with other fields such as statistics, mathematics, physics, and theoretical computer science.
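“Programming by example” can be made concrete with a minimal sketch: rather than hand-coding a rule for telling “small” from “large” measurements, the rule is derived from labeled examples. The data and the nearest-centroid rule below are illustrative choices, not a prescribed method:

```python
# Labeled training examples: (measurement, class), all values made up.
examples = [(1.2, "small"), (0.8, "small"), (1.5, "small"),
            (6.1, "large"), (5.4, "large"), (7.0, "large")]

def train(data):
    # "Learning" step: compute the mean measurement per class.
    means = {}
    for label in {y for _, y in data}:
        vals = [x for x, y in data if y == label]
        means[label] = sum(vals) / len(vals)
    return means

def classify(x, means):
    # Predict the class whose mean is closest (nearest-centroid rule).
    return min(means, key=lambda label: abs(means[label] - x))

model = train(examples)
print(classify(1.0, model))  # small
print(classify(6.5, model))  # large
```

Nothing about the boundary between the two classes was written by hand; it falls out of the examples, which is exactly the contrast with explicit programming that the definition draws.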