Python Codes

Keras version used in models: keras==1.1.0 | LSTM 0.2

Python - Autoencoder MNIST is an autoencoder model for image classification on the MNIST dataset, developed with Keras, with ModelCheckpoint as a callback to save weights.
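
A minimal sketch of this idea. Note that the repository pins keras==1.1.0, whose API differs; this sketch uses the modern tf.keras API, and the layer sizes and checkpoint filename are illustrative:

```python
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.callbacks import ModelCheckpoint

# Load MNIST and flatten the 28x28 images to 784-dimensional vectors in [0, 1]
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Simple dense autoencoder: 784 -> 32 -> 784
inputs = Input(shape=(784,))
encoded = Dense(32, activation="relu")(inputs)
decoded = Dense(784, activation="sigmoid")(encoded)
autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# ModelCheckpoint saves the best weights seen so far (illustrative filename)
checkpoint = ModelCheckpoint("autoencoder_weights.h5",
                             save_best_only=True, save_weights_only=True)
autoencoder.fit(x_train, x_train,
                epochs=10, batch_size=256,
                validation_data=(x_test, x_test),
                callbacks=[checkpoint])
```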

Python - Autoencoder for Text Classification is an autoencoder model for text classification built with Keras, also using the ModelCheckpoint callback.

Python - Deep Learning with Lasagne is a deep neural network developed with Lasagne, where you can inspect the weight values in each layer, including the biases.

Python - Face Recognition is a model that uses OpenCV to detect faces.

Python - Image Extraction from Twitter is a model that extracts pictures and their links from Twitter web pages and plots them with matplotlib.

Python - Keras Convolutional Neural Network is a CNN developed to classify the MNIST dataset with an accuracy greater than 99%.
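
A compact sketch of a CNN of this kind, again written against the modern tf.keras API rather than keras==1.1.0; the architecture and hyperparameters are illustrative, not necessarily the exact ones in the file:

```python
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.utils import to_categorical

# MNIST as 28x28x1 images, labels one-hot encoded
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype("float32") / 255.0
y_train, y_test = to_categorical(y_train, 10), to_categorical(y_test, 10)

model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Dropout(0.25),
    Flatten(),
    Dense(128, activation="relu"),
    Dropout(0.5),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=10, batch_size=128,
          validation_data=(x_test, y_test))
```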

Python - Keras Deep Regressor is a deep neural network for predicting a continuous output, built with Keras, with a learning-rate scheduler driven by the derivative of the error, random initial weights, and loss history.

Python - Keras LSTM Network is a recurrent neural network (LSTM) that predicts and generates text.

Python - Keras Multi Layer Perceptron is an MLP model built with Keras for prediction and classification, with loss history and a learning rate scheduled according to the derivative of the error.

Python - Machine Learning is a Principal Component Analysis followed by a linear regression.

Python - NLP Doc2Vec is a Natural Language Processing model where I asked a question of a Wikipedia page and 4 possible answers were chosen semantically from the tokenized and vectorized page, using KNN and cosine distance.

Python - NLP Semantic Analysis is a Natural Language Processing model that classifies a given sentence according to its semantic similarity to other sentences, using cosine distance.

Python - NLP Word2Vec is a model developed from scratch to measure cosine similarity among words.
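
Cosine similarity between word vectors can be computed from scratch with NumPy; the toy vectors below are invented for illustration (in the real file they would come from a trained model):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy word vectors, purely illustrative
vectors = {
    "king":  [0.80, 0.65, 0.10],
    "queen": [0.75, 0.70, 0.12],
    "apple": [0.10, 0.20, 0.90],
}
print(cosine_similarity(vectors["king"], vectors["queen"]))  # close to 1.0
print(cosine_similarity(vectors["king"], vectors["apple"]))  # much lower
```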

Python - Reinforcement Learning is a model based on simple rules and game theory where agents' attitudes change according to the payoff achieved. It can be adapted for tit-for-tat, always-cooperate, always-defect and other strategies. Rewards are placed in the payoff matrix.
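
A sketch of the payoff-matrix idea under stated assumptions: the prisoner's-dilemma payoffs and the two strategies below are illustrative, not the repository's exact rules:

```python
# Payoff matrix: keys are (my move, opponent move), 0 = cooperate, 1 = defect;
# values are (my payoff, opponent payoff)
PAYOFF = {
    (0, 0): (3, 3),
    (0, 1): (0, 5),
    (1, 0): (5, 0),
    (1, 1): (1, 1),
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return 0 if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 1

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)   # each agent sees the opponent's history
        move_b = strategy_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))
```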

Python - Social Networks is a model that draws social network configurations and connections.

Python - Support Vector Machines is a machine learning model that classifies the Iris dataset with an SVM and plots the result.
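
A minimal scikit-learn sketch of SVM classification on the Iris dataset; the kernel choice and the two-feature scatter plot are illustrative:

```python
import matplotlib.pyplot as plt
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

iris = datasets.load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = svm.SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Scatter plot of the first two features, colored by class
plt.scatter(X[:, 0], X[:, 1], c=y)
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1])
plt.show()
```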

Python - Theano Deep Learning is a neural network with two hidden layers built with Theano.

Autoencoder for Audio is a model where I compressed an audio file and used an autoencoder to reconstruct it, for use in phoneme classification.

Collaborative Filtering is a recommender system where the algorithm predicts a movie rating based on the genre of the movie and the similarity among people who watched the same movie.
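
A tiny user-based collaborative-filtering sketch: a missing rating is predicted as a similarity-weighted average of other users' ratings. The rating matrix and helper names are made up for illustration:

```python
import numpy as np

# Rows = users, columns = movies; 0 means "not rated yet"
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 3, 1],
    [1, 1, 0, 5],
    [1, 2, 4, 4],
], dtype=float)

def cosine(u, v):
    """Cosine similarity over the movies both users have rated."""
    mask = (u > 0) & (v > 0)
    if not mask.any():
        return 0.0
    u, v = u[mask], v[mask]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def predict(user, movie):
    """Similarity-weighted average of other users' ratings for this movie."""
    num, den = 0.0, 0.0
    for other in range(ratings.shape[0]):
        if other == user or ratings[other, movie] == 0:
            continue
        sim = cosine(ratings[user], ratings[other])
        num += sim * ratings[other, movie]
        den += abs(sim)
    return num / den if den else 0.0

print(predict(user=0, movie=2))   # predicted rating of movie 2 for user 0
```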

Convolutional NN Lasagne is a Convolutional Neural Network model in Lasagne to solve the MNIST task.

Ensembled Machine Learning is a .py file where 7 machine learning algorithms are applied to a three-class classification task and the hyperparameters of each algorithm are tuned, using the Iris dataset from scikit-learn.

GAN Generative Adversarial contains models of Generative Adversarial Networks (GANs).

Hyperparameter Tuning RL is a model where the hyperparameters of a neural network are adjusted via reinforcement learning: according to a reward, the hyperparameter configuration (the environment) is changed through a policy, using the Boston dataset. The hyperparameters tuned are learning rate, epochs, decay, momentum, number of hidden layers and nodes, and initial weights.

Keras Regularization L2 is a neural network model for regression made with Keras where L2 regularization is applied to prevent overfitting.
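
In modern tf.keras, an L2 penalty is attached per layer roughly as below (the pinned keras==1.1.0 used the older W_regularizer argument instead); the penalty value and input size are illustrative:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.regularizers import l2

# Regression network with an L2 penalty on the hidden-layer weights
model = Sequential([
    Dense(64, activation="relu", input_shape=(13,), kernel_regularizer=l2(0.01)),
    Dense(1),  # linear output for regression
])
model.compile(optimizer="adam", loss="mse")
```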

Lasagne Neural Nets Regression is a neural network model based on Theano and Lasagne that fits a linear regression on a continuous target variable and reaches 99.4% accuracy. It uses the DadosTeseLogit.csv sample file.

Lasagne Neural Nets + Weights is a neural network model based on Theano and Lasagne where it is possible to visualize the weights from the inputs X1 and X2 to the hidden layer. It can also be adapted to visualize the weights between the hidden layer and the output. It uses the DadosTeseLogit.csv sample file.

Multinomial Regression is a regression model where the target variable has 3 classes.

Neural Networks for Regression shows multiple solutions to a regression problem, solved with sklearn, Keras, Theano and Lasagne. It uses the Boston dataset from sklearn and reaches more than 98% accuracy.

NLP + Naive Bayes Classifier is a model where movie reviews were labeled as positive or negative and the algorithm then classifies an entirely new set of reviews using Logistic Regression, Decision Trees and Naive Bayes, reaching an accuracy of 92%.
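
A sketch of the bag-of-words + Naive Bayes part using a scikit-learn pipeline (the repository also tries Logistic Regression and Decision Trees); the sample reviews are invented for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy labeled reviews: 1 = positive, 0 = negative
reviews = [
    "great film, loved the acting",
    "terrible plot and awful pacing",
    "wonderful, moving and beautifully shot",
    "boring, I walked out halfway through",
]
labels = [1, 0, 1, 0]

# Bag-of-words features feeding a multinomial Naive Bayes classifier
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(reviews, labels)

print(model.predict(["the acting was wonderful"]))     # expected: positive
print(model.predict(["what an awful, boring movie"]))  # expected: negative
```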

NLP Anger Analysis is a Doc2Vec model combined with a Word2Vec model to analyze the level of anger, via synonyms, in consumer complaints posted on a U.S. retailer's Facebook page.

NLP Consumer Complaint is a model where Facebook posts about a U.S. computer retailer were scraped, tokenized, lemmatized and processed with Word2Vec. t-SNE and Latent Dirichlet Allocation were then applied to classify the arguments and the weight of each keyword used by a consumer in a complaint. The code also analyzes word frequency across 100 posts.

NLP Convolutional Neural Network is a convolutional neural network for text, used to classify movie reviews.

NLP Doc2Vec is a Natural Language Processing file where cosine similarity among phrases is measured through Doc2Vec.

NLP Document Classification is code for document classification based on Latent Dirichlet Allocation.

NLP Facebook Analysis analyzes Facebook posts for word frequency and topic modelling using LDA.

NLP Facebook Scrap is Python code for scraping data from Facebook.

NLP - Latent Dirichlet Allocation is a Natural Language Processing model where a Wikipedia page on Statistical Inference is classified by topic, using Latent Dirichlet Allocation with Gensim, NLTK, t-SNE and K-Means.
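
Topic modeling with Gensim's LdaModel looks roughly like this; the tiny hand-written corpus and number of topics are illustrative (in the real file the tokens come from a Wikipedia page via NLTK):

```python
from gensim import corpora
from gensim.models import LdaModel

# Tokenized documents (toy example)
texts = [
    ["statistical", "inference", "sample", "population"],
    ["bayesian", "prior", "posterior", "inference"],
    ["kmeans", "cluster", "tsne", "embedding"],
]

# Map tokens to integer ids and build bag-of-words vectors
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(doc) for doc in texts]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               passes=10, random_state=0)
for topic_id, words in lda.print_topics():
    print(topic_id, words)
```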

NLP Probabilistic ANN is a Natural Language Processing model where sentences are vectorized with Gensim and a probabilistic neural network is developed for sentiment analysis.

NLP Semantic Doc2Vec + Neural Network is a model where positive and negative movie reviews were extracted and semantically classified with NLTK and BeautifulSoup, then labeled as positive or negative. The text was then used as input for training the neural network. After training, new sentences are fed into the Keras neural network and classified. It uses the zip file.

NLP Sentiment Positive is a model that identifies website content as positive, neutral or negative using the BeautifulSoup and NLTK libraries, and plots the results.

NLP Twitter Analysis ID # is a model that extracts posts from Twitter based on user ID or hashtag.

NLP Twitter Scrap is a model that scrapes Twitter data and outputs the cleaned text.

NLP Twitter Streaming is a model for analyzing real-time data from Twitter (under development).

NLP Twitter Streaming Mood is a model where the evolution of the mood of Twitter posts is measured over a period of time.

NLP Wikipedia Summarization is Python code that summarizes any given page in a few sentences.

NLP Word Frequency is a model that calculates the frequency of nouns, verbs and other words in Facebook posts.

Probabilistic Neural Network is a Probabilistic Neural Network for Time Series Prediction.

REAL-TIME Twitter Analysis is a model where the Twitter stream is extracted, words and sentences are tokenized, word embeddings are created, and topic modeling is performed and classified using K-Means. NLTK's SentimentAnalyzer is then used to classify each sentence of the stream as positive, neutral or negative. A cumulative sum is used to generate the plot, and the code loops every second, collecting new tweets.

RESNET-2 is a Deep Residual Neural Network.

ROC Curve Multiclass is a .py file where Naive Bayes is used on the Iris dataset and the ROC curves of the different classes are plotted.
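
Multiclass ROC curves are typically obtained by binarizing the labels (one-vs-rest) and plotting one curve per class; a scikit-learn sketch on Iris with Gaussian Naive Bayes, details illustrative:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_curve, auc

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = GaussianNB().fit(X_train, y_train)
probs = clf.predict_proba(X_test)                   # class probabilities
y_bin = label_binarize(y_test, classes=[0, 1, 2])   # one-vs-rest targets

# One ROC curve per class
for i in range(3):
    fpr, tpr, _ = roc_curve(y_bin[:, i], probs[:, i])
    plt.plot(fpr, tpr, label=f"class {i} (AUC = {auc(fpr, tpr):.2f})")

plt.plot([0, 1], [0, 1], linestyle="--")            # chance line
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```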

SQUEEZENET is a compact convolutional architecture designed to reach AlexNet-level accuracy with far fewer parameters.

Stacked Machine Learning is a .py file where t-SNE, Principal Component Analysis and Factor Analysis are applied to reduce the dimensionality of the data, and classification performance is measured after applying K-Means.

Support Vector Regression is an SVM model for non-linear regression on an artificial dataset.

Text-to-Speech is a .py file where Python speaks any given text and saves it as an audio .wav file.

Time Series ARIMA is an ARIMA model for forecasting time series, with an error margin of 0.2%.
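
A minimal statsmodels ARIMA sketch; the (p, d, q) order and the synthetic series are illustrative, not the repository's settings:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly series with a trend plus noise, just for illustration
rng = pd.date_range("2015-01-01", periods=60, freq="MS")
series = pd.Series(np.linspace(10, 40, 60) + np.random.normal(0, 1, 60), index=rng)

model = ARIMA(series, order=(1, 1, 1))   # (p, d, q)
fitted = model.fit()
print(fitted.forecast(steps=6))          # forecast the next 6 months
```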

Time Series Prediction with Neural Networks - Keras is a neural network model for forecasting time series, using Keras with an adaptive learning rate based on the derivative of the loss.

Variational Autoencoder is a VAE made with Keras.

Web Crawler is code that scrapes data from different URLs of a hotel website.

t-SNE Dimensionality Reduction is a t-SNE model for dimensionality reduction, compared to Principal Component Analysis in terms of discriminatory power.
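
Comparing t-SNE and PCA on the same data takes only a few lines with scikit-learn; the digits dataset and perplexity below are illustrative choices:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)

# Project to 2D with both methods
X_pca = PCA(n_components=2).fit_transform(X)
X_tsne = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.scatter(X_pca[:, 0], X_pca[:, 1], c=y, s=5)
ax1.set_title("PCA")
ax2.scatter(X_tsne[:, 0], X_tsne[:, 1], c=y, s=5)
ax2.set_title("t-SNE")
plt.show()
```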

t-SNE PCA + Neural Networks is a model that compares the performance of neural networks trained after t-SNE, PCA and K-Means.

t-SNE PCA LDA embeddings is a model where t-SNE, Principal Component Analysis, Linear Discriminant Analysis and Random Forest embeddings are compared on a task of classifying clusters of similar digits.
