Problem Statement

As a data scientist for the marketing division at Reddit, I need to find the most predictive keywords and phrases to accurately classify posts from the dating advice and relationship advice subreddits, so we can determine which ads should populate on each page. Because this is a classification problem, we'll use Logistic Regression and Naive Bayes models. Misclassifications in this situation are fairly low-risk, so I will use the accuracy score and set a baseline of 63.3% to gauge success. Using TfidfVectorization, I'll look at feature importance to find out which terms have the highest predictive power for the target variable. If successful, this model can be used to target other pages that have a similar frequency of the same terms and phrases.

Data Collection

See the dating-advice-scrape and relationship-advice-scrape notebooks for this section.

After turning all of the scrapes into DataFrames, I stored them as CSVs, which can be found in the dataset folder of this repo.
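
The notebooks contain the actual scraping code; the sketch below shows one way it could look, assuming Reddit's public `.json` listing endpoint, a placeholder user-agent string, and illustrative subreddit slugs and file paths:

```python
import time

import pandas as pd
import requests

def scrape_subreddit(subreddit, pages=40):
    """Pull posts from a subreddit's JSON listing, one page (up to 100 posts) at a time."""
    posts, after = [], None
    headers = {"User-agent": "subreddit-classifier"}  # placeholder user-agent
    for _ in range(pages):
        res = requests.get(
            f"https://www.reddit.com/r/{subreddit}/new.json",
            headers=headers,
            params={"limit": 100, "after": after},
        )
        if res.status_code != 200:
            break
        data = res.json()["data"]
        posts.extend(child["data"] for child in data["children"])
        after = data["after"]
        if after is None:   # no more pages
            break
        time.sleep(1)       # be polite to the API
    return pd.DataFrame(posts)

# Store each scrape as a CSV in the dataset folder (file names are illustrative)
scrape_subreddit("dating_advice").to_csv("dataset/dating_advice.csv", index=False)
scrape_subreddit("relationship_advice").to_csv("dataset/relationship_advice.csv", index=False)
```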

Data Cleaning and EDA

  • dropped rows with a null selftext column because those rows are useless to me
  • combined the title and selftext columns into one new all_text column
  • examined distributions of word counts for the title and selftext columns per post and compared the two subreddits (a sketch of these cleaning steps follows this list)
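
A minimal sketch of those cleaning steps in pandas, assuming the CSV paths from the previous sketch and the `title`/`selftext` fields returned by the scrape:

```python
import pandas as pd

dating = pd.read_csv("dataset/dating_advice.csv")
relationship = pd.read_csv("dataset/relationship_advice.csv")

# Label each post with its subreddit, then stack the two scrapes into one frame
dating["subreddit"] = "dating_advice"
relationship["subreddit"] = "relationship_advice"
posts = pd.concat([dating, relationship], ignore_index=True)

# Drop rows with a null selftext -- they have no body text to learn from
posts = posts.dropna(subset=["selftext"])

# Combine title and selftext into one new all_text column
posts["all_text"] = posts["title"] + " " + posts["selftext"]

# Word counts per post, for comparing distributions across the two subreddits
posts["word_count"] = posts["all_text"].str.split().str.len()
print(posts.groupby("subreddit")["word_count"].describe())
```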

Preprocessing and Modeling

Found the baseline accuracy score of 0.633, which means that if I always pick the most frequently occurring value, I will be correct 63.3% of the time.
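
The baseline is just the share of the majority class; with the `posts` frame from the cleaning sketch above:

```python
# Normalized frequency of the most common class = baseline accuracy
baseline = posts["subreddit"].value_counts(normalize=True).max()
print(f"baseline accuracy: {baseline:.3f}")  # reported as 0.633
```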

First attempt: logistic regression model with default CountVectorizer parameters. Train score: 99 | test: 75 | cross val: 74

Second attempt: tried CountVectorizer with stemming preprocessing on the first set of scrapes; a pretty bad score with high variance. Train 99%, test 72%

  • attempted to decrease max features and the score got a lot worse
  • tried lemmatizer preprocessing instead and the test score went up to 74% (a sketch of this kind of pipeline follows this list)
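
A rough sketch of the kind of pipeline behind these attempts, assuming NLTK's WordNetLemmatizer for the lemmatizer preprocessing and the `posts` frame from the cleaning step (the notebooks may wire this up differently):

```python
import nltk
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import Pipeline

nltk.download("wordnet", quiet=True)
lemmatizer = WordNetLemmatizer()

def lemmatize(doc):
    """Preprocessor: lower-case and lemmatize each token before vectorizing."""
    return " ".join(lemmatizer.lemmatize(tok) for tok in doc.lower().split())

X = posts["all_text"]
y = posts["subreddit"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

pipe = Pipeline([
    ("cvec", CountVectorizer(preprocessor=lemmatize)),
    ("logreg", LogisticRegression(max_iter=1000)),
])
pipe.fit(X_train, y_train)
print("train:", pipe.score(X_train, y_train))
print("test:", pipe.score(X_test, y_test))
print("cross val:", cross_val_score(pipe, X_train, y_train, cv=5).mean())
```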

Simply increasing the amount of data and stratifying y in my train/test split increased my CountVectorizer test score to 81 and cross val to 80. Adding 2 parameters to my CountVectorizer helped a lot: a min_df of 3 and an ngram_range of (1,2) increased my test score to 83.2 and cross val to 82.3. But then these scores disappeared.
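
A sketch of that configuration, reusing the `X` and `y` variables from the previous sketch; the exact scores depend on the data:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import Pipeline

# Stratify on y so both subreddits keep the same proportions in train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

pipe = Pipeline([
    ("cvec", CountVectorizer(min_df=3, ngram_range=(1, 2))),
    ("logreg", LogisticRegression(max_iter=1000)),
])
pipe.fit(X_train, y_train)
print("test:", pipe.score(X_test, y_test))                                  # reported as 83.2
print("cross val:", cross_val_score(pipe, X_train, y_train, cv=5).mean())   # reported as 82.3
```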

I believe Tfidf worked the best to reduce my overfitting/variance issue because I customized the stop words to take out the ones that were really too frequent to be predictive. This was a success; however, with more time I probably could've tweaked them a bit more to boost all of the scores. Evaluating both single terms and terms in pairs (bigrams) was the best param that gridsearch suggested; however, all of my top most predictive terms ended up being unigrams. My initial set of features had a good amount of gibberish terms and typos; raising the minimum number of times a term had to appear to 2 helped get rid of those. Gridsearch also suggested a 90% max_df rate, which helped to remove oversaturated terms as well. Finally, setting max features to 5000 cut my columns down to about a quarter of what they had been, to focus on only the most commonly used terms of what was left.
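
A sketch of how the Tfidf model and the gridsearch over those parameters could look; the custom stop words added below are placeholders for the actual list, and the top-coefficient lookup at the end is one way to surface the most predictive terms:

```python
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Standard English stop words plus terms too frequent to be predictive
# (the extra words here are placeholders, not the exact custom list)
custom_stop_words = list(ENGLISH_STOP_WORDS) + ["relationship", "dating"]

pipe = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words=custom_stop_words)),
    ("logreg", LogisticRegression(max_iter=1000)),
])

params = {
    "tfidf__ngram_range": [(1, 1), (1, 2)],   # single words vs. words plus bigrams
    "tfidf__min_df": [2, 3],                  # drop gibberish/typo terms seen only once
    "tfidf__max_df": [0.9, 0.95],             # drop oversaturated terms
    "tfidf__max_features": [5000, 10000],
}
gs = GridSearchCV(pipe, params, cv=5)
gs.fit(X_train, y_train)
print(gs.best_params_)                   # reported best: (1,2) ngrams, min_df 2, max_df 0.9, 5000 features
print("test:", gs.score(X_test, y_test))

# Most predictive terms = largest-magnitude logistic regression coefficients
best = gs.best_estimator_
terms = best.named_steps["tfidf"].get_feature_names_out()
coefs = best.named_steps["logreg"].coef_[0]
top = sorted(zip(coefs, terms), key=lambda pair: abs(pair[0]), reverse=True)[:15]
print(top)
```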

Summary and Recommendations

Even though I would like higher train and test scores, I was able to successfully reduce the variance, and there are certainly a few terms with high predictive power, so I think the model is ready to launch as a test. The same keywords could be used to find other potentially lucrative pages if ad engagement increases. I found it interesting that taking out the overly used words helped with overfitting but brought the accuracy score down. I think there is probably still room to play around with the parameters of the Tfidf Vectorizer to see if different stop words produce a different or better result.

About

Used Reddit's API, the requests library, and BeautifulSoup to scrape posts from two subreddits, Dating Advice & Relationship Advice, and trained a binary classification model to predict which subreddit a given post originated from.