
Conversation

@Logende (Contributor) commented Dec 23, 2022

No description provided.

@rht (Contributor) commented Dec 24, 2022

I tried to lint the README.md via https://vale.sh/, as an experiment. I found that it said "cachable" is not a thing. According to mikebronner/laravel-model-caching#159, "cacheable" occurs ~9x more often than "cachable". And so, I think we should use "cacheable".

As a side note, I think we should have a prose linter for the repo. Haven't decided yet which one to use. Contenders: https://github.com/errata-ai/vale#functionality.

@Corvince (Contributor) commented

So cool to see this. It is always nice to see new libraries emerging to grow the mesa ecosystem! I am excited to take a closer look at the library, and I am optimistic that this PR is mergeable.

@rht rht closed this Dec 24, 2022
@rht rht reopened this Dec 24, 2022
@rht (Contributor) commented Dec 24, 2022

Pressed the wrong button. Wanted to say that one more use case would be to resume execution of a simulation run at a checkpoint.

@Logende (Contributor, Author) commented Dec 24, 2022

> I tried to lint the README.md via https://vale.sh/, as an experiment. I found that it said "cachable" is not a thing. According to GeneaLabs/laravel-model-caching#159, "cacheable" occurs ~9x more often than "cachable". And so, I think we should use "cacheable".
>
> As a side note, I think we should have a prose linter for the repo. Haven't decided yet which one to use. Contenders: https://github.com/errata-ai/vale#functionality.

Renamed `cachable` to `cacheable` in both the mesa-replay repo and in this example.

@Logende (Contributor, Author) commented Dec 24, 2022

> Pressed the wrong button. Wanted to say that one more use case would be to resume execution of a simulation run at a checkpoint.

I will add something like a `go_to_step(step: int)` function and a `step_backwards()` function to `cacheable_model`. This could then be integrated into the UI to let the user jump to a given step or move backwards.

Edit: I think I will go with just a function to resume at some checkpoint, because moving backwards would make this a little more complicated. It would be easy with the regular `CacheableModel`, but not with `StreamingCacheableModel`.
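A rough sketch of what resuming at a checkpoint could look like. All names here (`run_from_checkpoint`, the toy `Counter` model, the in-memory list of snapshots) are hypothetical illustrations under the assumption that the cache stores one full model snapshot per step; this is not the actual mesa-replay API:

```python
import copy


class CacheableModel:
    """Hypothetical sketch: wraps a model, caches a snapshot of its state
    after every step, and can resume the run from any cached checkpoint."""

    def __init__(self, model):
        self.model = model
        self.cache = []          # one deep-copied snapshot per completed step
        self.step_number = 0

    def step(self):
        self.model.step()
        self.step_number += 1
        # Deep-copy so later mutations of the live model don't alter the checkpoint.
        self.cache.append(copy.deepcopy(self.model))

    def run_from_checkpoint(self, step):
        """Restore the model state cached after `step` and continue from there."""
        self.model = copy.deepcopy(self.cache[step - 1])
        self.step_number = step
        # Discard checkpoints past the resume point; they will be re-created.
        self.cache = self.cache[:step]


# Toy stand-in for a mesa model, just for demonstration.
class Counter:
    def __init__(self):
        self.value = 0

    def step(self):
        self.value += 1


wrapped = CacheableModel(Counter())
for _ in range(5):
    wrapped.step()
wrapped.run_from_checkpoint(2)   # rewind to the state after step 2
print(wrapped.model.value)       # → 2
```

This also hints at why moving backwards is harder with a streaming cache: a streaming variant would write snapshots out as the run progresses instead of keeping them all in memory, so jumping back would require re-reading earlier states from the stream.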

@rht (Contributor) commented Dec 24, 2022

> Pressed the wrong button. Wanted to say that one more use case would be to resume execution of a simulation run at a checkpoint.
>
> I will add something like a `go_to_step(step: int)` function and a `step_backwards()` function to `cacheable_model`. This could then be integrated into the UI to allow the user to jump to some step or move backwards.

No worries about this one for this PR. I will open an issue in https://github.com/Logende/mesa-replay.

@Logende (Contributor, Author) commented Dec 24, 2022

> > Pressed the wrong button. Wanted to say that one more use case would be to resume execution of a simulation run at a checkpoint.
> >
> > I will add something like a `go_to_step(step: int)` function and a `step_backwards()` function to `cacheable_model`. This could then be integrated into the UI to allow the user to jump to some step or move backwards.
>
> No worries about this one for this PR. I will open an issue in https://github.com/Logende/mesa-replay.

Already implemented and tested 😸
Logende/mesa-replay@ff94c06

@rht (Contributor) commented Dec 24, 2022

cc: @snunezcr. Finally, we have a model replay feature!

@Tortar commented Dec 27, 2022

I just want to say that we should really try to merge mesa-replay into mesa itself in the future, once the last difficulties are overcome. It can be a very useful feature, and for users to benefit the most it should be in the main repo :-)

@rht (Contributor) commented Dec 28, 2022

> I just want to say that we should really try to merge mesa-replay into mesa itself in the future, once the last difficulties are overcome. It can be a very useful feature, and for users to benefit the most it should be in the main repo :-)

That's the plan. Currently, it's a separate repo so that it's more convenient to make breaking changes. Also, the cache size is too big at the moment.

@jackiekazil any comments?

@rht (Contributor) commented Dec 30, 2022

Merging now. Thank you @Logende !
