Feat/replace random forest #1246
base: development
Conversation
lgtm, just some docstrings and questions :)
@staticmethod
def get_intensifier(  # type: ignore
    scenario: Scenario,
I guess the intensifier and initial design are just copy-pasted from the MF facade, right? Nothing has changed here?
    n_instances: int,
    lock: threading.Lock,
) -> None:
    """Collect predictions from a single estimator. However, we sum the results from all instances."""
add arg descriptions
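For context on the snippet above: a hedged sketch of what a per-estimator collection helper like this typically does, modeled on sklearn's private `_accumulate_prediction` pattern. The helper name and the usage loop below are illustrative assumptions, not the PR's actual code; the lock guards the shared output buffer when trees predict from parallel threads.

```python
import threading

import numpy as np
from sklearn.ensemble import RandomForestRegressor


def _collect_prediction(estimator, X, out, lock):
    # Hypothetical helper (assumption, mirrors sklearn's private
    # _accumulate_prediction): sum one tree's predictions into the
    # shared `out` buffer; the lock makes the += safe across threads.
    pred = estimator.predict(X)
    with lock:
        out += pred


# Usage sketch on toy data.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))
y = X.sum(axis=1)
forest = RandomForestRegressor(n_estimators=8, random_state=0).fit(X, y)

out = np.zeros(len(X))
lock = threading.Lock()
for tree in forest.estimators_:
    _collect_prediction(tree, X, out, lock)
mean_prediction = out / len(forest.estimators_)
```

Dividing the accumulated sum by the number of trees recovers the forest's usual averaged prediction.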
Validate X whenever one tries to predict, apply, predict_proba.
It is based on the RF regressor from sklearn 1.6.1:
https://github.com/scikit-learn/scikit-learn/blob/99bf3d8e4eed5ba5db19a1869482a238b6223ffd/sklearn/ensemble/_forest.py#L629
However, we add another parameter to allow the model to ignore feature checking (this is done after
Why would we want to ignore feature checking? Add the rationale to the docstring.
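To illustrate the idea under discussion: a hedged sketch of how one might optionally bypass sklearn's per-call input validation on predict (useful when X was already validated upstream and predict sits in a hot loop). The class name and `skip_validation` flag are assumptions for illustration, not the PR's actual API, and overriding the private `_validate_X_predict` ties this to a specific sklearn version (~1.6 here).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor


class UncheckedRandomForest(RandomForestRegressor):
    """Hypothetical subclass (illustrative only): optionally skip
    sklearn's input validation when predicting."""

    def __init__(self, n_estimators=100, random_state=None, skip_validation=False):
        super().__init__(n_estimators=n_estimators, random_state=random_state)
        self.skip_validation = skip_validation

    def _validate_X_predict(self, X):
        # Private sklearn hook called by predict/apply; version-dependent.
        if self.skip_validation:
            # Caller must guarantee a finite 2-D array with the right
            # number of features; we only coerce dtype and layout.
            return np.ascontiguousarray(X, dtype=np.float32)
        return super()._validate_X_predict(X)
```

The trade-off is the usual one: skipping validation saves overhead on repeated calls but shifts responsibility for well-formed input onto the caller.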
cross_trees_variance: bool = False,
criterion: str = "squared_error",
splitter: str = "random",  # Should not be changed
max_depth: int = 2**20,  # TODO: HP: None is default in sklearn, here it's 2**20
Rewrite the comments; AFAIK there is no actual TODO here. These were the HPs of the previous RF, right?
in all cases, keep the information somewhere
there are some comments left to address
Replace pyrfr with sklearn's RF
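For background on the PR's goal: SMAC's surrogate model needs a predictive mean and variance, and with sklearn's forest these can be estimated empirically across trees, which is presumably what the `cross_trees_variance` flag in the diff refers to. A hedged sketch of that idea (the function below is illustrative, not the PR's implementation):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor


def predict_mean_var(forest, X):
    """Empirical mean/variance over the trees of a fitted forest.

    Sketch of a cross-trees-variance estimate: each tree yields one
    prediction per point; the mean and variance over trees serve as
    the surrogate's mu and sigma^2.
    """
    preds = np.stack([tree.predict(X) for tree in forest.estimators_])
    return preds.mean(axis=0), preds.var(axis=0)


rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X[:, 0] ** 2
forest = RandomForestRegressor(n_estimators=16, random_state=0).fit(X, y)
mu, var = predict_mean_var(forest, X)
```

The mean matches `forest.predict`, while the variance adds the uncertainty signal that pyrfr previously provided.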