
NFL GamePass Advertising Strategy Analysis


0 Overview

The NFL has used its Game Pass product as a primary marketing vehicle (Pat Even), and its European audience has grown. Last season's Game Pass advertising strategy still had problems, though: in terms of markets, some countries received more investment but returned very limited ROI, while others showed the opposite pattern. To address this, I analyse the UK market and propose solutions aimed at growing the market and improving return on ad spend.

0.1 Structure

To tackle this problem, the article is organised into six parts.

Part 1 states the problem and gives an overview of the Game Pass business.

Part 2 covers the data import and join work in MySQL.

Part 3 covers the data cleaning work in a notebook.

Part 4 analyses Game Pass ad performance in Tableau and pinpoints the problems in the UK market.

Part 5 performs feature engineering on the user-spend data driven by the UK ad campaign, then runs simple machine-learning predictions on the engineered features.

Part 6 segments the users, compares purchase value and purchase frequency across the groups, and proposes simple marketing strategies for each.

0.2 Notes

(The data for this project come from the NFL and Two Circles. Per NFL rules, the data cannot be published; apologies for the inconvenience.)

The project can be browsed on maxaishaojun's GitHub (note that GitHub may render poorly on mobile).

For the "4 Problem Analysis" part, see the Tableau dashboard "NFL GamePass -19 Season Ad Campaign Analysis".

0.3 References

Pat, E. () 'NFL Viewership Growth Throughout Europe Exposes Opportunities in the US', Front Office Sports, 7 March. Available at: /two-circles… (Accessed: 6 April )

1 Project Background

(Note: this part of the documentation comes from Two Circles.)

1.1 Problem Statement

Imagine the /19 NFL season has just ended and you are on the NFL Game Pass Europe marketing team. Your task is to analyse how effectively advertising acquired new NFL Game Pass subscribers, mine buyer profiles across the deployed ad channels, and make recommendations for next season's digital campaign covering creative, audiences, geo-targeting, channel mix and budget.

1.2 Project Overview

NFL Game Pass is the premium OTT subscription product for NFL fans outside the US and a top priority for the NFL's international business. Much like a "Netflix of the NFL", it gives fans access to every live game, game replays and highlights, NFL RedZone, the NFL Network 24/7 linear TV channel, original content and shows, a game archive, downloads and more.

The NFL's popularity in Europe has grown in recent years, and this is reflected in NFL Game Pass subscriber numbers across the continent. Sustaining that level of subscriber growth requires strategies to minimise churn, optimise acquisition, and build consideration among fans who are not yet ready to convert.

Digital advertising is the largest component of NFL Game Pass Europe's subscriber-acquisition marketing, accounting for roughly 46% of acquired subscribers each season. The campaign typically launches in the NFL preseason (August), peaks in intensity over the first four weeks of the NFL season (September), and ends after the Super Bowl (early February). It deploys a range of paid channels, chiefly Google Ads (search/PPC), Facebook and YouTube (social), and various programmatic display partners (display). Display ads are also published, at no media cost, on the NFL's other owned-and-operated (O&O) digital properties. Budgets, creative and audience-targeting strategy differ by channel, and striking the right balance across the campaign is critical to its success.

NFL Game Pass Europe has just wrapped the /19 digital advertising campaign and is entering the planning cycle for /20. With access to a rich dataset covering ad performance and buyer demographics, NFL Game Pass Europe wants to mine it for insights to inform next season's digital advertising strategy.

1.3 Appendix

/19 NFL Game Pass Europe Product Definitions

/19 Audience Strategy and Definitions

/19 Market Strategy and Definitions

/19 Example Ad Units

/19 Marketing Plan and Promotions

2 Data Import

-- 1 Add a numweek index and revenue to MediaPerformanceData
select a.date, a.nflweek, a.numweek, a.platform, a.market, a.audience,
       round(a.`Spend (GBP)`*1.3, 2) as spend_usd,
       a.impressions, a.clicks, a.transactions, b.revenue_usd
from (
    select *,
           @number := case when @weeks = nflweek then @number else @number+1 end as numweek,
           @weeks := nflweek as weeks
    from mediaperformancedata
    join (select @weeks := null, @number := 0) as variable
    order by date
) a
join (
    select mt.date, mt.nflweek, mt.platform, mt.market, mt.audience,
           round(sum(`Revenue (USD)`), 2) as revenue_usd
    from MediaTransactionsdata mt
    join Subscriptionsdata s on mt.transactionID = s.transactionid
    group by mt.date, mt.nflweek, mt.platform, mt.market, mt.audience
    order by mt.date, mt.platform, mt.market, mt.audience
) b
on (a.date = b.date and a.nflweek = b.nflweek and a.platform = b.platform
    and a.market = b.market and a.audience = b.audience);

-- 2 Join the remaining three tables
select mt.transactionid, s.customerid, mt.date, mt.nflweek, mt.platform, mt.market, mt.audience,
       c.`NFL Game Pass Segment`, c.gender, c.age, c.`NFL Tickets`, c.`NFL Shop`, c.`NFL Fantasy`,
       c.`New To NFL Database`, c.`Email Opt-In`, c.`Favourite Team`,
       s.SKU, s.`Buy Type`, s.`Converted Free Trial`, s.`Revenue (USD)`
from Subscriptionsdata s
left join mediatransactionsdata mt on s.transactionid = mt.transactionid
left join customersdata c on s.customerid = c.customerid
order by mt.date;
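The numweek column above is a dense week index: the @number user variable increments only when nflweek changes between consecutive date-ordered rows. For anyone reproducing this step in pandas instead of MySQL, here is a minimal sketch of the same idea (the sample values are hypothetical; only the date and nflweek column names come from the query above):

import pandas as pd

# Hypothetical stand-in for the mediaperformancedata table
mp = pd.DataFrame({
    'date':    ['2018-08-01', '2018-08-02', '2018-08-09', '2018-08-10'],
    'nflweek': ['Week 1', 'Week 1', 'Week 2', 'Week 2'],
})

mp = mp.sort_values('date')
# Start a new index whenever nflweek changes between consecutive rows,
# mirroring the @number/@weeks user-variable trick in the SQL above
mp['numweek'] = (mp['nflweek'] != mp['nflweek'].shift()).cumsum()
print(mp)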

3 Data Cleaning

import pandas as pd
import numpy as np

mp_file_path = '/Users/apple/Downloads/nfl/mediaperformance.csv'
nfl_file_path = '/Users/apple/Downloads/nfl/nfladverts.csv'
mp_data = pd.read_csv(mp_file_path)
nfl_data = pd.read_csv(nfl_file_path)

# Rename columns
nfl_data.columns = ['transaction_id', 'customer_id', 'date', 'nflweek', 'platform', 'market',
                    'audience', 'segment', 'gender', 'age_group', 'tickets',
                    'shop', 'fantasy', 'new_to_database', 'email_opt_in',
                    'favourite_team', 'sku', 'buy_type', 'converted_free_trial',
                    'revenue_usd']
nfl_data['revenue_usd'].describe()
# mp_data.columns

# Duplicates
mp_unique = mp_data.groupby(['date', 'nflweek', 'numweek', 'platform', 'market', 'audience']).size().reset_index(name='Freq')
mp_unique = mp_unique.sort_values(by=['Freq'], ascending=False)
print(mp_unique)

nfl_data.head()
nfl_unique = nfl_data.groupby(['transaction_id']).size().reset_index(name='Freq')
nfl_unique = nfl_unique.sort_values(by=['Freq'], ascending=False)
print(nfl_unique)
# data.drop_duplicates(subset="columns Name", keep=False, inplace=True)

# Missing values
mp_null_total = mp_data.isnull().sum(axis=0).sort_values(ascending=False)
mp_null_percent = (mp_data.isnull().sum()/len(mp_data.index)).sort_values(ascending=False).round(3)
mp_missing_data = pd.concat([mp_null_total, mp_null_percent], axis=1, keys=['Total', 'Percent'])

nfl_null_total = nfl_data.isnull().sum(axis=0).sort_values(ascending=False)
nfl_null_percent = (nfl_data.isnull().sum()/len(nfl_data.index)).sort_values(ascending=False).round(3)
nfl_missing_data = pd.concat([nfl_null_total, nfl_null_percent], axis=1, keys=['Total', 'Percent'])

# Drop rows without a transaction_id
nfl_data = nfl_data.dropna(subset=['transaction_id'])
# Code a missing favourite team as 'No Team'
nfl_data['favourite_team'] = nfl_data['favourite_team'].fillna('No Team')
# Code a missing gender as 'U'
nfl_data['gender'] = nfl_data['gender'].fillna('U')

# gender has too many missing values and no clear logical link to the other columns;
# checked whether customer ids differ between genders -- they do not
# /pandas-docs/stable/reference/api/pandas.pivot_table.html
gender_predict = nfl_data[nfl_data.gender.notnull()]
gender_pivot = pd.pivot_table(gender_predict, values='customer_id', index=['gender'], aggfunc=np.mean)

# Fill buy_type according to the business logic
# nfl_data['buy_type'].unique()
nfl_data['buy_type'] = nfl_data.apply(
    lambda row: 'Buy Now' if row['sku'] == 'Free'
    else ('Buy Now' if (row['converted_free_trial'] == 0 and row['revenue_usd'] > 0)
          else ('Free Trial' if (row['converted_free_trial'] == 0 and row['revenue_usd'] == 0)
                or (row['converted_free_trial'] == 1 and row['revenue_usd'] > 0)
                else row['buy_type'])), axis=1)
# nfl_data.isnull().sum(axis=0).sort_values(ascending=False)

# Outliers
# Convert binary columns to 0 & 1 for easier computation, and remove anomalies
# nfl_data['market'].unique()
nfl_data['tickets'] = np.where(nfl_data['tickets'] == 'N', 0, 1)
nfl_data['shop'] = np.where(nfl_data['shop'] == 'N', 0, 1)
nfl_data['fantasy'] = np.where(nfl_data['fantasy'] == False, 0, 1)
nfl_data['new_to_database'] = np.where(nfl_data['new_to_database'] == 'N', 0, 1)
nfl_data['email_opt_in'] = np.where(nfl_data['email_opt_in'] == 'False', 0,
                                    np.where(nfl_data['email_opt_in'] == '0', 0, 1))
nfl_data['buy_type'] = np.where(nfl_data['buy_type'] == 'Free Trial', 0, 1)
nfl_data['favourite_team'] = np.where(nfl_data['favourite_team'] == 'NFL', 'No Team', nfl_data['favourite_team'])
nfl_data['favourite_team'] = np.where(nfl_data['favourite_team'] == 'No Team', 0, 1)

# Fix logical inconsistencies between sku, buy_type and converted_free_trial
# /questions/36921951/truth-value-of-a-series-is-ambiguous-use-a-empty-a-bool-a-item-a-any-o
# /t/how-to-resolve-python-error-cannot-compare-a-dtyped-int64-array-with-a-scalar-of-type-bool/73065
nfl_data['sku'] = np.where((nfl_data['revenue_usd'] == 0) & (nfl_data['sku'] != 'Pro'), 'Free', nfl_data['sku'])
nfl_data['sku'] = np.where((nfl_data['revenue_usd'] > 0) & (nfl_data['sku'] == 'Free'), 'Pro', nfl_data['sku'])
nfl_data['buy_type'] = np.where((nfl_data['sku'] == 'Pro') & (nfl_data['revenue_usd'] == 0), 0, nfl_data['buy_type'])
nfl_data['converted_free_trial'] = np.where((nfl_data['sku'] == 'Pro') & (nfl_data['buy_type'] == 0) & (nfl_data['revenue_usd'] > 0), 1, nfl_data['converted_free_trial'])
nfl_data['converted_free_trial'] = np.where((nfl_data['sku'] == 'Pro') & (nfl_data['buy_type'] == 1) & (nfl_data['revenue_usd'] > 0), 0, nfl_data['converted_free_trial'])

mp_data.to_excel(r'/Users/apple/Downloads/nfl/mp_data.xlsx', index=True)
nfl_data.to_csv(r'/Users/apple/Downloads/nfl/nfl_data.csv', index=True)

4 Problem Analysis

The problem analysis can be browsed on the Tableau dashboard "NFL GamePass -19 Season Ad Campaign Analysis".
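Since the dashboard itself is not embedded here, the core comparison behind it — spend versus attributed revenue by market — can be sketched in pandas. A minimal sketch, assuming the mp_data export written at the end of section 3 and the spend_usd / revenue_usd columns produced by the SQL in section 2:

import pandas as pd

# Export written at the end of the data-cleaning step (section 3)
mp_data = pd.read_excel('/Users/apple/Downloads/nfl/mp_data.xlsx')

# Total spend and attributed revenue per market
market_perf = mp_data.groupby('market')[['spend_usd', 'revenue_usd']].sum()

# Return on ad spend: attributed revenue per dollar spent
market_perf['roas'] = (market_perf['revenue_usd'] / market_perf['spend_usd']).round(2)

# Markets with low ROAS despite high spend are the ones flagged in the dashboard
print(market_perf.sort_values('roas'))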

5 Feature Engineering

import pandas as pd
from pandas import DataFrame
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

user_file_path = '/Users/apple/Downloads/nfl/nfl_data.csv'
user_data = pd.read_csv(user_file_path)

# Extract the UK market
uk_data = user_data.loc[user_data['market'] == 'UK']
# uk_data['revenue_usd'].describe()
# user_data.customer_id = user_data.customer_id.astype(str)
uk_data = uk_data.drop(['Unnamed: 0', 'date', 'customer_id', 'audience', 'market', 'nflweek', 'platform'], axis=1)
# uk_data.head()

# Look at the revenue distribution
uk_data.revenue_usd.hist(bins=20, alpha=0.5)
plt.title("Game Pass Europe Revenue Distribution")
plt.xlabel("Revenue($)")
plt.ylabel("Frequency")

5.1 Data Preprocessing

# Scaling
# Apart from revenue_usd the columns are all yes/no or categorical; there is no other continuous data.
# Min-max normalization is sensitive to outliers: after scaling, outliers get squashed away,
# so it is a poor choice when the dataset contains outliers.
# Standardization is not bounded by the data range, so it is generally the preferred option here.
def normalize(data, column):
    for col in column:
        data['normalize_'+col] = (data[col] - np.min(data[col])) / (np.max(data[col]) - np.min(data[col]))
    return data

def standardize(data, column):
    for col in column:
        data['standardize_'+col] = (data[col] - np.mean(data[col])) / (np.std(data[col]))
    return data

columns = ['revenue_usd']
uk_data = standardize(uk_data, columns)

# A few activation functions for comparison: each squashes values into some interval,
# more sensitive in some ranges than in others
def tanh(data, column):
    for col in column:
        data['tanh_'+col] = np.tanh(data[col])
    return data

def sigmoid(data, column):
    for col in column:
        # logistic sigmoid 1/(1+e^-x): squashes into (0, 1)
        data['sigmoid_'+col] = 1.0 / (1.0 + np.exp(-data[col]))
    return data

def leakyrelu(data, column, a=1):
    for col in column:
        data['leakyrelu_'+col] = np.array([x if x > 0 else a * x for x in data[col]])
    return data

def softplus(data, column):
    for col in column:
        data['softplus_'+col] = np.log(np.exp(data[col]) + 1)
    return data

uk_data.standardize_revenue_usd.hist(bins=20, alpha=0.5)
plt.title("Game Pass Europe Revenue Distribution")
plt.xlabel("Revenue($)")
plt.ylabel("Frequency")

# Feature types
# Categorical encoding - dummy coding
# Dummy variables can quickly inflate the feature dimensionality
dummy_data = pd.get_dummies(uk_data,
                            columns=['segment', 'gender', 'age_group', 'sku'],
                            prefix=['segment', 'gender', 'age_group', 'sku'],
                            prefix_sep="_")
# uk_data = uk_data.drop(['gender_U'], axis=1)

# Bin the continuous revenue into categories
conditions = [
    (uk_data['revenue_usd'] == 0),
    (uk_data['revenue_usd'] <= 13),
    (uk_data['revenue_usd'] <= 98)]
choices = [0, 1, 2]
dummy_data['revenue_category'] = np.select(conditions, choices, default=3)
uk_data['revenue_category'] = np.select(conditions, choices, default=3)

fig, ax = plt.subplots()
dummy_data['revenue_category'].value_counts().plot(ax=ax, kind='bar')

5.2 Feature Selection

# dummy_data.columns
select_feature = ['tickets', 'shop', 'fantasy', 'new_to_database', 'email_opt_in',
                  'favourite_team', 'buy_type', 'converted_free_trial', 'segment_Acq',
                  'segment_Ret', 'segment_iOS', 'gender_F', 'gender_M', 'gender_U',
                  'age_group_18-21', 'age_group_22-25', 'age_group_26-30',
                  'age_group_31-35', 'age_group_36-40', 'age_group_41-50',
                  'age_group_51-60', 'age_group_60+', 'age_group_Under 18',
                  'age_group_Unknown', 'sku_Essential', 'sku_Free', 'sku_Playoffs',
                  'sku_Pro', 'sku_Super Bowl', 'sku_Weekly']

# Variance-threshold selection
from sklearn.feature_selection import VarianceThreshold
varianceThreshold = VarianceThreshold(threshold=0.2)
varianceThreshold.fit_transform(dummy_data[select_feature])
var_result = varianceThreshold.get_support()

# Correlation-based selection
# Pick the basic features first, then match the rest
# Mind the logical relationships between columns
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_regression
selectKBest = SelectKBest(f_regression, k=10)
feature = dummy_data[select_feature]
bestFeature = selectKBest.fit_transform(feature, dummy_data[['revenue_usd']])
feature_result = selectKBest.get_support()

def feature_results(list_feature, list_result):
    dic = {}
    for i in range(len(list_feature)):
        feature = list_feature[i]
        result = list_result[i]
        dic[feature] = result
    result_tuple = sorted(dic.items(), key=lambda kv: kv[1])
    return result_tuple

var_results = feature_results(select_feature, var_result)
weights_results = feature_results(select_feature, feature_result)
print(var_results[-5:-1])
print(weights_results[-10:-1])

5.3 Dimensionality Reduction

# Logical dimensionality reduction
# Transform the original features based on the variance and correlation results
# Merge tickets, shop, fantasy, new_to_database and email_opt_in to strengthen the signal
# Ideally the components would be weighted
# uk_data.sku.unique()
uk_data['user_behaviour'] = (uk_data['tickets'] + uk_data['shop'] + uk_data['fantasy']
                             + uk_data['new_to_database'] + uk_data['email_opt_in'])

# Split age_group into young (0-21), adult (22-40) and old (40+)
uk_data['age'] = np.where((uk_data['age_group'] == 'Under 18') | (uk_data['age_group'] == '18-21'), 'young',
                 np.where((uk_data['age_group'] == '22-25') | (uk_data['age_group'] == '26-30')
                          | (uk_data['age_group'] == '31-35') | (uk_data['age_group'] == '36-40'), 'adult', 'old'))

# sku, buy_type and converted_free_trial describe purchase behaviour, so they belong with the target variable
uk_data['sku_category'] = np.where((uk_data['sku'] == 'Pro') & (uk_data['buy_type'] == 1), 'Pro-BuyNow',
                          np.where((uk_data['sku'] == 'Pro') & (uk_data['buy_type'] == 0) & (uk_data['converted_free_trial'] == 0), 'Pro-FreeTrial-NoConvert',
                          np.where((uk_data['sku'] == 'Pro') & (uk_data['buy_type'] == 0) & (uk_data['converted_free_trial'] == 1), 'Pro-FreeTrial-Convert',
                                   uk_data['sku'])))

uk_data = uk_data.drop(['tickets', 'shop', 'fantasy', 'new_to_database', 'email_opt_in',
                        'sku', 'buy_type', 'converted_free_trial', 'age_group',
                        'standardize_revenue_usd'], axis=1)
uk_data = uk_data[['transaction_id', 'age', 'gender', 'favourite_team', 'user_behaviour',
                   'segment', 'revenue_usd', 'sku_category', 'revenue_category']]
uk_data.head(10)

# Dummy coding, for models that do not accept categorical data
dummy_data = pd.get_dummies(uk_data,
                            columns=['age', 'gender', 'segment'],
                            prefix=['age', 'gender', 'segment'],
                            prefix_sep="_")
# Join the dummy columns back onto the original categorical columns
feature_data = pd.merge(dummy_data, uk_data[['transaction_id', 'age', 'gender', 'segment']],
                        on='transaction_id', how='inner')
feature_data.head()

# Pivot table
"""
feature_data['transaction_count'] = 1
pd.pivot_table(feature_data,
               columns=["age"],
               index=['favourite_team'],
               values=['revenue_usd', 'transaction_count'],
               aggfunc=[np.mean, np.sum])
"""

# Correlation visualisation
"""
variables = ['favourite_team', 'user_behaviour', 'revenue_usd', 'age_adult', 'age_old', 'age_young',
             'gender_F', 'gender_M', 'segment_Acq', 'segment_Ret', 'segment_iOS']
sns.set()
sns.pairplot(feature_data[variables], size=2.5)
plt.show()
"""

# Multicollinearity check
"""
x = feature_data[['favourite_team', 'user_behaviour', 'age_adult', 'age_old', 'age_young',
                  'gender_F', 'gender_M', 'segment_Acq', 'segment_Ret', 'segment_iOS']]
from statsmodels.stats.outliers_influence import variance_inflation_factor
vif = pd.DataFrame()
vif["VIF Factor"] = [variance_inflation_factor(x.values, i) for i in range(x.shape[1])]
vif["features"] = x.columns
vif.round(1)
feature_data.to_csv(r'/Users/apple/Downloads/nfl/feature_data.csv', index=True)
"""

# Algorithm experiments
# Evaluation setup: train/validation split
from sklearn.model_selection import train_test_split
x = feature_data[['favourite_team', 'user_behaviour', 'age_adult', 'age_old', 'age_young',
                  'gender_F', 'gender_M', 'segment_Acq', 'segment_Ret', 'segment_iOS']]
y1 = feature_data[['revenue_usd']]        # regression target
y2 = feature_data[['revenue_category']]   # classification target
x_train1, x_val1, y_train1, y_val1 = train_test_split(x, y1, test_size=0.2, random_state=1)
x_train2, x_val2, y_train2, y_val2 = train_test_split(x, y2, test_size=0.2, random_state=1)
print("the number of data for training:")
print(y_train1.count())
print("the number of data for validation:")
print(y_val1.count())

# Metrics: accuracy and RMSE
from sklearn.metrics import mean_squared_error
def rmse_model(model, x, y):
    predictions = model.predict(x)
    rmse = np.sqrt(mean_squared_error(predictions, y))
    return rmse

"""
from sklearn import metrics
def confusion_matrix(model, x, y):
    model_confusion_test = metrics.confusion_matrix(y, model.predict(x))
    matrix = pd.DataFrame(data=model_confusion_test,
                          columns=['Predicted 0', 'Predicted 1', 'Predicted 2', 'Predicted 3'],
                          index=['Actual 0', 'Actual 1', 'Actual 2', 'Actual 3'])
    return matrix
"""

# Regression
# Because of compute and time constraints, only the code is shown
"""
from sklearn.linear_model import LinearRegression
linear_regression = LinearRegression()
linear_regression.fit(x_train1, y_train1)
print(rmse_model(linear_regression, x_val1, y_val1))
"""

# Bias-variance trade-off
"""
from sklearn.preprocessing import PolynomialFeatures
train_rmses = []
val_rmses = []
degrees = range(1, 8)
for i in degrees:
    poly = PolynomialFeatures(degree=i, include_bias=False)
    x_train_poly = poly.fit_transform(x_train1)
    poly_reg = LinearRegression()
    poly_reg.fit(x_train_poly, y_train1)
    # training RMSE
    y_train_pred = poly_reg.predict(x_train_poly)
    train_poly_rmse = np.sqrt(mean_squared_error(y_train1, y_train_pred))
    train_rmses.append(train_poly_rmse)
    # validation RMSE
    x_val_poly = poly.fit_transform(x_val1)
    y_val_pred = poly_reg.predict(x_val_poly)
    val_poly_rmse = np.sqrt(mean_squared_error(y_val1, y_val_pred))
    val_rmses.append(val_poly_rmse)
    print('degree = %s, training RMSE = %.2f, validation RMSE = %.2f' % (i, train_poly_rmse, val_poly_rmse))

fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(degrees, train_rmses, label='training set')
ax.plot(degrees, val_rmses, label='validation set')
ax.set_yscale('log')
ax.set_xlabel('Degree')
ax.set_ylabel('RMSE')
ax.set_title('Bias/Variance Trade-off')
plt.legend()
plt.show()
"""

# Regularization, to reduce the effect of overfitting
# Ridge shown here; the lasso and elasticnet code is similar. Lasso and elasticnet shrink
# harder and can zero out collinear features, whereas ridge keeps them
"""
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
rmse = []
alpha = [1, 2, 5, 10, 20, 30, 40, 50, 75, 100]
for a in alpha:
    ridge = make_pipeline(PolynomialFeatures(4), Ridge(alpha=a))
    ridge.fit(x_train1, y_train1)
    predict = ridge.predict(x_val1)
    rmse.append(np.sqrt(mean_squared_error(predict, y_val1)))
print(rmse)
plt.scatter(alpha, rmse)

alpha = np.arange(20, 60, 2)
rmse = []
for a in alpha:
    # ridge = Ridge(alpha=a, copy_X=True, fit_intercept=True)
    # ridge.fit(x_train1, y_train1)
    ridge = make_pipeline(PolynomialFeatures(4), Ridge(alpha=a))
    ridge.fit(x_train1, y_train1)
    predict = ridge.predict(x_val1)
    rmse.append(np.sqrt(mean_squared_error(predict, y_val1)))
print(rmse)
plt.scatter(alpha, rmse)

ridge = make_pipeline(PolynomialFeatures(4), Ridge(alpha=24.6))
ridge_model = ridge.fit(x_train1, y_train1)
predictions = ridge_model.predict(x_val1)
print("Ridge RMSE is: " + str(rmse_model(ridge_model, x_val1, y_val1)))
"""

# Classification
list(y_train2['revenue_category'].unique())

# Decision tree
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier
decision_tree_model = DecisionTreeClassifier(criterion='entropy')
decision_tree_model.fit(x_train2, y_train2)
print(decision_tree_model.score(x_train2, y_train2))
print(decision_tree_model.score(x_val2, y_val2))

# Tuning: tree depth
train_score = []
val_score = []
for depth in np.arange(1, 20):
    decision_tree = tree.DecisionTreeClassifier(max_depth=depth, min_samples_leaf=5)
    decision_tree.fit(x_train2, y_train2)
    train_score.append(decision_tree.score(x_train2, y_train2))
    val_score.append(decision_tree.score(x_val2, y_val2))
plt.plot(np.arange(1, 20), train_score)
plt.plot(np.arange(1, 20), val_score)
plt.legend(['Training Accuracy', 'Validation Accuracy'])
plt.title('Decision Tree Tuning')
plt.xlabel('Depth')
plt.ylabel('Accuracy')

# Tuning: minimum samples per leaf
train_score = []
val_score = []
for leaf in np.arange(20, 100):
    decision_tree = tree.DecisionTreeClassifier(max_depth=10, min_samples_leaf=leaf)
    decision_tree.fit(x_train2, y_train2)
    train_score.append(decision_tree.score(x_train2, y_train2))
    val_score.append(decision_tree.score(x_val2, y_val2))
plt.plot(np.arange(20, 100), train_score)
plt.plot(np.arange(20, 100), val_score)
plt.legend(['Training Accuracy', 'Validation Accuracy'])
plt.title('Decision Tree Tuning')
plt.xlabel('Minimum Samples Leaf')
plt.ylabel('Accuracy')

my_decision_tree_model = DecisionTreeClassifier(max_depth=10, min_samples_leaf=20)
my_decision_tree_model.fit(x_train2, y_train2)
print(my_decision_tree_model.score(x_train2, y_train2))
print(my_decision_tree_model.score(x_val2, y_val2))

# Confusion matrix
from sklearn.metrics import accuracy_score, confusion_matrix, precision_recall_fscore_support
y_predict = my_decision_tree_model.predict(x_val2)
cm = confusion_matrix(y_val2, y_predict)
# Transform to df for easier plotting; labels follow class order 0-3 (free, low, median, high)
cm_df = pd.DataFrame(cm,
                     index=['free', 'low', 'median', 'high'],
                     columns=['free', 'low', 'median', 'high'])
plt.figure(figsize=(5.5, 4))
sns.heatmap(cm_df, annot=True)
plt.title('Decision Tree \nAccuracy:{0:.3f}'.format(accuracy_score(y_val2, y_predict)))
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()

# Learning curve
from sklearn.model_selection import learning_curve
train_sizes, train_scores, val_scores = learning_curve(
    DecisionTreeClassifier(max_depth=10, min_samples_leaf=20), x, y2,
    # Number of folds in cross-validation
    cv=5,
    # Evaluation metric
    scoring='accuracy',
    # Use all computer cores
    n_jobs=-1,
    # 5 different sizes of the training set
    train_sizes=np.linspace(0.1, 1.0, 5))

# Means and standard deviations of training set scores
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
# Means and standard deviations of validation set scores
val_mean = np.mean(val_scores, axis=1)
val_std = np.std(val_scores, axis=1)

# Draw lines
plt.plot(train_sizes, train_mean, '--', color="#ff8040", label="Training score")
plt.plot(train_sizes, val_mean, color="#40bfff", label="Cross-validation score")
# Draw bands
plt.fill_between(train_sizes, train_mean - train_std, train_mean + train_std, color="#DDDDDD")
plt.fill_between(train_sizes, val_mean - val_std, val_mean + val_std, color="#DDDDDD")
# Create plot
plt.title("Learning Curve \n k-fold=5, number of neighbours=5")
plt.xlabel("Training Set Size"), plt.ylabel("Accuracy Score"), plt.legend(loc="best")
plt.tight_layout()
plt.show()

# Curse of dimensionality
d_train = []
d_val = []
for i in range(1, 9):
    X_train_index = x_train2.iloc[:, 0:i]
    X_val_index = x_val2.iloc[:, 0:i]
    classifier = DecisionTreeClassifier(max_depth=10, min_samples_leaf=20)
    dt_model = classifier.fit(X_train_index, y_train2.values.ravel())
    d_train.append(dt_model.score(X_train_index, y_train2))
    d_val.append(dt_model.score(X_val_index, y_val2))
plt.title('Decision Tree Curse of Dimensionality')
plt.plot(range(1, 9), d_val, label="Validation")
plt.plot(range(1, 9), d_train, label="Train")
plt.xlabel('Number of Features')
plt.ylabel('Score (Accuracy)')
plt.legend()
plt.xticks(range(1, 9))
plt.show()

# The predictions are poor; this may come down to the feature selection,
# or these algorithms may simply not fit the problem.
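To put those weak scores in context, it can help to compare the tree against a naive majority-class baseline. The following is a quick sanity check reusing the variables above (the DummyClassifier comparison is an addition, not part of the original analysis):

from sklearn.dummy import DummyClassifier

# Majority-class baseline: always predict the most frequent revenue_category
baseline = DummyClassifier(strategy='most_frequent')
baseline.fit(x_train2, y_train2)

print('baseline accuracy:     ', baseline.score(x_val2, y_val2))
print('decision tree accuracy:', my_decision_tree_model.score(x_val2, y_val2))
# If the tree barely beats this baseline, the selected features carry little
# signal for revenue_category, consistent with the conclusion above.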

6 User Segmentation and Recommendations

6.1 User Segmentation

With feature engineering done, the task in this step is to segment our users and propose different strategies for each group. Using an Excel PivotTable and visualisations, we build a matrix with favourite team, NFL consumption behaviour, user segment, age and gender as the comparison dimensions, and (average) purchase value and purchase count as the target dimensions, as sketched below.
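For readers without the Excel workbook, the same matrix can be rebuilt with a pandas pivot table. A minimal sketch, assuming the feature_data frame (or its CSV export) from section 5, whose dimension columns were constructed there:

import numpy as np
import pandas as pd

feature_data = pd.read_csv('/Users/apple/Downloads/nfl/feature_data.csv')

# Each row is one transaction, so a constant column gives purchase counts
feature_data['transaction_count'] = 1

# Comparison dimensions as the index, target dimensions as the values
segment_matrix = pd.pivot_table(
    feature_data,
    index=['favourite_team', 'gender', 'age', 'segment'],
    values=['revenue_usd', 'transaction_count'],
    aggfunc={'revenue_usd': np.mean, 'transaction_count': np.sum})

print(segment_matrix)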

6.2 Marketing Recommendations

Given the UK's position in Game Pass Europe's market strategy, the aim should be to get ever more people engaged with the NFL while raising ROI. Setting budget aside, that means finding ways to improve the spend and product experience of all four user groups.

¶ For the star users (middle-aged and older men with a favourite team), maintain and improve their product experience, and organise fan events to strengthen the relationship and raise willingness to spend.

¶ For the cheap-plan users (new male users with no favourite team), who are still on the fence about Game Pass, or whose weak sense of belonging dampens spending (they like watching games, but mostly hunt for free packages): run online and offline fan-engagement events, or offer in-season price promotions, to lift purchase intent. Then again, limited purchasing power may also cap how much they spend, though their traffic can still earn Game Pass indirect advertising revenue.

¶ For the "one big purchase only" users, spending power is clearly not the issue. Run user research: why were they willing to pay a premium, and why did they buy only once? How can their product experience be improved so that they come back and buy again?

¶ For the dog users (middle-aged and older women, or younger fans, with no favourite team), the first job is the NFL's awareness marketing for this group (someone who does not like the NFL will hardly use Game Pass), including NFL campaigns aimed at women and younger audiences (building impressions), repeated promotional events (building familiarity and encouraging participation) and PR work, to raise both the functional and the ethical perception of the NFL among this group.
