Hi everyone, I'm Dong Ge.
I've shared tree-visualization tools before, but I recently found an even more stunning one that renders trees far more realistically. Without further ado, here's what it looks like ↓
Even drawing a random forest is no trouble for it.

Installing GraphViz

pybaobabdt depends on GraphViz, so first download the GraphViz installer package and install it. Then install pygraphviz and pybaobabdt; a single

pip install pybaobabdt

does the trick.
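Before moving on, a quick sanity check can confirm that both layers are wired up. This is a minimal sketch of my own, not from the original article; it only verifies that the imports resolve and that the GraphViz binaries are reachable from pygraphviz.

import pygraphviz
import pybaobabdt

# If the GraphViz binaries are missing, the layout call below will fail.
g = pygraphviz.AGraph()
g.add_edge('root', 'leaf')
g.layout(prog='dot')
print('pygraphviz OK:', pygraphviz.__version__)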
pybaobabdt usage

pybaobabdt is also ridiculously simple to use; there is only one core command, pybaobabdt.drawTree. Below is the example code from the official documentation, best run in a Jupyter notebook.

import pybaobabdt
import pandas as pd
from scipy.io import arff
from sklearn.tree import DecisionTreeClassifier
from matplotlib.colors import LinearSegmentedColormap
from matplotlib.colors import ListedColormap
from colour import Color
import matplotlib.pyplot as plt
import numpy as np
# Load the vehicle dataset and separate features from the target column
data = arff.loadarff('vehicle.arff')
df = pd.DataFrame(data[0])
y = list(df['class'])
features = list(df.columns)
features.remove('class')
X = df.loc[:, features]

# Fit a plain decision tree and draw it
clf = DecisionTreeClassifier().fit(X, y)
ax = pybaobabdt.drawTree(clf, size=10, dpi=72, features=features, colormap='Spectral')
Each color corresponds to one class (the target), and every fork is labeled with its split condition, so the partitioning logic is clear at a glance. The depth of the tree is laid out neatly as well.
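The colormap argument also accepts a matplotlib Colormap object, which is presumably what the extra color imports in the example are for. As a hedged sketch (reusing clf and features from above; the color list and the choice of which class to highlight are arbitrary, not from the original article), a ListedColormap can gray out every class except one:

from matplotlib.colors import ListedColormap

# One entry per class, in the classifier's class order; highlight one class
# in purple and mute the rest. The specific colors are an arbitrary choice.
colors = ['gray', 'gray', 'purple', 'gray']
ax = pybaobabdt.drawTree(clf, size=10, dpi=72, features=features, colormap=ListedColormap(colors))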
The branch diameter is not just decoration either: it encodes the number (proportion) of samples, so the more samples fall under a given split condition, the thicker the branch.

And if you notice that the bottom-most branches are too thin and fragile, that is a hint to think about overfitting risk, for example by raising the minimum number of samples required at a node, as in the sketch below.
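Here is a minimal sketch of that idea (reusing X, y, and features from the example above; the value 20 is an arbitrary choice, not from the original article) showing how constraining leaf size prunes away those hair-thin branches:

from sklearn.tree import DecisionTreeClassifier

# Require at least 20 samples per leaf; the tree can no longer grow the
# hair-thin bottom branches, at the cost of a coarser fit.
clf_pruned = DecisionTreeClassifier(min_samples_leaf=20).fit(X, y)
ax = pybaobabdt.drawTree(clf_pruned, size=10, dpi=72, features=features, colormap='Spectral')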
Drawing a random forest
import pybaobabdt
import pandas as pd
from scipy.io import arff
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
# Load the vehicle dataset and separate features from the target column
data = arff.loadarff('vehicle.arff')
df = pd.DataFrame(data[0])
y = list(df['class'])
features = list(df.columns)
features.remove('class')
X = df.loc[:, features]

# Train a random forest of 20 trees
clf = RandomForestClassifier(n_estimators=20, n_jobs=-1, random_state=0)
clf.fit(X, y)

# Draw each estimator in its own cell of a 5x4 grid, then save the figure
size = (15, 15)
plt.rcParams['figure.figsize'] = size
fig = plt.figure(figsize=size, dpi=300)
for idx, tree in enumerate(clf.estimators_):
    ax1 = fig.add_subplot(5, 4, idx + 1)
    pybaobabdt.drawTree(tree, model=clf, size=15, dpi=300, features=features, ax=ax1)
fig.savefig('random-forest.png', format='png', dpi=300, transparent=True)
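When the grid is too small to read, a single estimator can be rendered on its own. A minimal sketch (reusing clf and features from above; the index 0 and the file name are arbitrary choices):

# Render one tree from the trained forest at full size and save it.
ax = pybaobabdt.drawTree(clf.estimators_[0], model=clf, size=10, dpi=300, features=features)
ax.get_figure().savefig('tree-0.png', dpi=300, transparent=True)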