Commit cbb666b
Rename params (#253)
* first sweep of renaming
* fix destroyed types
* parameter table column renamed to label
* param and param_labels, params!, seem to work
* allow partial execution of unit tests
* remove non existing tests
* fix model unittests
* remove unnessary test layer
* finish replacing
* all unit tests passed
* rename param_values -> params
* add StatsAPI as dep
* add coef and coefnames
* rename df => dof (#254)
* rename df => dof
* import dof from StatsAPI
* rename dof file
* rename sem_fit => fit
* typo
* add nobs and fix testsw
* add coeftable
* fix proximal tests
* fix exports and StatsAPI docstrings
* fix tests
* fix tests
* thx evie for the typo :)
* fix coeftable

---------

Co-authored-by: Maximilian Ernst <[email protected]>
1 parent 955a181 commit cbb666b
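For users, the net effect of this commit is that fitting and inspecting a model now goes through the StatsAPI generics instead of package-specific names. A rough, hedged sketch of the renamed surface, pieced together from the commit message above (exact signatures are not shown on this page and may differ):

```julia
using StructuralEquationModels

# `model` is assumed to be an already specified SEM, as in the docs examples.
sem_solution = fit(model)      # was: sem_fit(model)

coef(sem_solution)             # parameter estimates (StatsAPI)
coefnames(sem_solution)        # parameter labels (StatsAPI)
nobs(sem_solution)             # number of observations (StatsAPI)
dof(sem_solution)              # was: df (StatsAPI)
coeftable(sem_solution)        # summary table of the estimates (StatsAPI)
params(sem_solution)           # was: param_values (exact argument may differ)
```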


65 files changed, +488 −398 lines changed

Project.toml

Lines changed: 1 addition & 0 deletions
@@ -18,6 +18,7 @@ PrettyTables = "08abe8d2-0d0c-5749-adfa-8a2ac140af0d"
 Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
 SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
 Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
+StatsAPI = "82ae8749-77ed-4fe6-ae5f-f523153014b0"
 StatsBase = "2913bbd2-ae8a-5f71-8c99-4fb6c76f3a91"
 StenoGraphs = "78862bba-adae-4a83-bb4d-33c106177f81"
 Symbolics = "0c5d862f-8b57-4792-8d23-62f2024744c7"
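StatsAPI is a lightweight package that owns these shared generic function names, so adding it as a dependency lets the package extend `fit`, `coef`, `dof`, and friends instead of defining new verbs. A minimal, hypothetical sketch of that pattern (the struct and field names below are illustrative, not the package's actual types):

```julia
# Illustrative only: how a package typically hooks into StatsAPI generics.
import StatsAPI: coef, coefnames, nobs

struct ToyFit                      # hypothetical stand-in for a fit result
    estimates::Vector{Float64}
    labels::Vector{Symbol}
    n_observations::Int
end

coef(f::ToyFit) = f.estimates          # extend the shared generic...
coefnames(f::ToyFit) = f.labels        # ...so generic downstream tooling
nobs(f::ToyFit) = f.n_observations     # works without package-specific names
```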

README.md

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ Models you can fit include
 - Multigroup SEM
 - Sums of arbitrary loss functions (everything the optimizer can handle).
 
-# What are the merrits?
+# What are the merits?
 
 We provide fast objective functions, gradients, and for some cases hessians as well as approximations thereof.
 As a user, you can easily define custom loss functions.

docs/src/developer/loss.md

Lines changed: 5 additions & 5 deletions
@@ -79,7 +79,7 @@ model = SemFiniteDiff(
     loss = (SemML, myridge)
 )
 
-model_fit = sem_fit(model)
+model_fit = fit(model)
 ```
 
 This is one way of specifying the model - we now have **one model** with **multiple loss functions**. Because we did not provide a gradient for `Ridge`, we have to specify a `SemFiniteDiff` model that computes numerical gradients with finite difference approximation.
@@ -117,17 +117,17 @@ model_new = Sem(
     loss = (SemML, myridge)
 )
 
-model_fit = sem_fit(model_new)
+model_fit = fit(model_new)
 ```
 
 The results are the same, but we can verify that the computational costs are way lower (for this, the julia package `BenchmarkTools` has to be installed):
 
 ```julia
 using BenchmarkTools
 
-@benchmark sem_fit(model)
+@benchmark fit(model)
 
-@benchmark sem_fit(model_new)
+@benchmark fit(model_new)
 ```
 
 The exact results of those benchmarks are of course highly depended an your system (processor, RAM, etc.), but you should see that the median computation time with analytical gradients drops to about 5% of the computation without analytical gradients.
@@ -241,7 +241,7 @@ model_ml = SemFiniteDiff(
     loss = MaximumLikelihood()
 )
 
-model_fit = sem_fit(model_ml)
+model_fit = fit(model_ml)
 ```
 
 If you want to differentiate your own loss functions via automatic differentiation, check out the [AutoDiffSEM](https://github.com/StructuralEquationModels/AutoDiffSEM) package.
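The "about 5%" figure quoted in the context line above can be checked directly with `BenchmarkTools`; a hedged sketch, assuming `model` (finite-difference gradients) and `model_new` (analytical gradients) are constructed as in the surrounding documentation:

```julia
using BenchmarkTools

b_finite   = @benchmark fit($model)      # numerical gradients
b_analytic = @benchmark fit($model_new)  # analytical gradients

# Ratio of median run times; roughly 0.05 would match the "about 5%" claim.
median(b_analytic).time / median(b_finite).time

# BenchmarkTools can also report the comparison directly:
judge(median(b_analytic), median(b_finite))
```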

docs/src/developer/optimizer.md

Lines changed: 3 additions & 3 deletions
@@ -34,7 +34,7 @@ algorithm(optimizer::SemOptimizerName) = optimizer.algorithm
 options(optimizer::SemOptimizerName) = optimizer.options
 ```
 
-Note that your optimizer is a subtype of `SemOptimizer{:Name}`, where you can choose a `:Name` that can later be used as a keyword argument to `sem_fit(engine = :Name)`.
+Note that your optimizer is a subtype of `SemOptimizer{:Name}`, where you can choose a `:Name` that can later be used as a keyword argument to `fit(engine = :Name)`.
 Similarly, `SemOptimizer{:Name}(args...; kwargs...) = SemOptimizerName(args...; kwargs...)` should be defined as well as a constructor that uses only keyword arguments:
 
 ´´´julia
@@ -46,10 +46,10 @@ SemOptimizerName(;
 ´´´
 A method for `update_observed` and additional methods might be usefull, but are not necessary.
 
-Now comes the substantive part: We need to provide a method for `sem_fit`:
+Now comes the substantive part: We need to provide a method for `fit`:
 
 ```julia
-function sem_fit(
+function fit(
     optim::SemOptimizerName,
     model::AbstractSem,
     start_params::AbstractVector;
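For orientation, a hedged skeleton of such a `fit` method follows; the solver call and the objective closure are hypothetical placeholders, since the actual backend code lives in the package's `src/optimizer` folder and is not part of this diff:

```julia
# Hypothetical sketch of a custom optimizer backend's `fit` method.
function fit(
    optim::SemOptimizerName,
    model::AbstractSem,
    start_params::AbstractVector;
    kwargs...,
)
    # `my_solver` and `evaluate_objective` are placeholders for whatever
    # solver the backend wraps and however it evaluates the SEM objective.
    result = my_solver(
        params -> evaluate_objective(model, params),
        start_params;
        algorithm = algorithm(optim),   # from the optimizer struct (see above)
        options = options(optim),
    )
    return result   # the real method would wrap this in the package's fit type
end
```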

docs/src/internals/files.md

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@ Source code is in the `"src"` folder:
 - `"types.jl"` defines all abstract types and the basic type hierarchy
 - `"objective_gradient_hessian.jl"` contains methods for computing objective, gradient and hessian values for different model types as well as generic fallback methods
 - The four folders `"observed"`, `"implied"`, `"loss"` and `"diff"` contain implementations of specific subtypes (for example, the `"loss"` folder contains a file `"ML.jl"` that implements the `SemML` loss function).
-- `"optimizer"` contains connections to different optimization backends (aka methods for `sem_fit`)
+- `"optimizer"` contains connections to different optimization backends (aka methods for `fit`)
     - `"optim.jl"`: connection to the `Optim.jl` package
 - `"frontend"` contains user-facing functions
     - `"specification"` contains functionality for model specification

docs/src/performance/mixed_differentiation.md

Lines changed: 3 additions & 3 deletions
@@ -19,15 +19,15 @@ model_ridge = SemFiniteDiff(
 
 model_ml_ridge = SemEnsemble(model_ml, model_ridge)
 
-model_ml_ridge_fit = sem_fit(model_ml_ridge)
+model_ml_ridge_fit = fit(model_ml_ridge)
 ```
 
 The results of both methods will be the same, but we can verify that the computation costs differ (the package `BenchmarkTools` has to be installed for this):
 
 ```julia
 using BenchmarkTools
 
-@benchmark sem_fit(model)
+@benchmark fit(model)
 
-@benchmark sem_fit(model_ml_ridge)
+@benchmark fit(model_ml_ridge)
 ```

docs/src/performance/mkl.md

Lines changed: 2 additions & 2 deletions
@@ -27,9 +27,9 @@ To check the performance implications for fitting a SEM, you can use the [`Bench
 ```julia
 using BenchmarkTools
 
-@benchmark sem_fit($your_model)
+@benchmark fit($your_model)
 
 using MKL
 
-@benchmark sem_fit($your_model)
+@benchmark fit($your_model)
 ```
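When running the two benchmark blocks above, it can help to confirm which BLAS backend is actually loaded; this check comes from the Julia standard library rather than from the diff:

```julia
using LinearAlgebra

LinearAlgebra.BLAS.get_config()   # typically lists OpenBLAS before `using MKL`

using MKL

LinearAlgebra.BLAS.get_config()   # should now list MKL as the active backend
```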

docs/src/performance/simulation.md

Lines changed: 1 addition & 1 deletion
@@ -100,7 +100,7 @@ models = [model1, model2]
 fits = Vector{SemFit}(undef, 2)
 
 Threads.@threads for i in 1:2
-    fits[i] = sem_fit(models[i])
+    fits[i] = fit(models[i])
 end
 ```
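The threaded loop in this hunk only parallelizes if Julia was started with more than one thread; a short reminder (plain Julia, not specific to this package):

```julia
# Start Julia with several threads, e.g. `julia --threads=4`
# (or set the JULIA_NUM_THREADS environment variable), then check:
Threads.nthreads()   # number of threads available to Threads.@threads
```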

docs/src/performance/starting_values.md

Lines changed: 2 additions & 2 deletions
@@ -1,9 +1,9 @@
 # Starting values
 
-The `sem_fit` function has a keyword argument that takes either a vector of starting values or a function that takes a model as input to compute starting values. Current options are `start_fabin3` for fabin 3 starting values [^Hägglund82] or `start_simple` for simple starting values. Additional keyword arguments to `sem_fit` are passed to the starting value function. For example,
+The `fit` function has a keyword argument that takes either a vector of starting values or a function that takes a model as input to compute starting values. Current options are `start_fabin3` for fabin 3 starting values [^Hägglund82] or `start_simple` for simple starting values. Additional keyword arguments to `fit` are passed to the starting value function. For example,
 
 ```julia
-sem_fit(
+fit(
     model;
     start_val = start_simple,
     start_covariances_latent = 0.5
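Because the renamed paragraph says `start_val` also accepts a plain vector, the other variant would look roughly like the sketch below; the values and their ordering are purely illustrative, since the parameter order is not shown in this diff:

```julia
# Hedged sketch: explicit starting values instead of a starting-value function.
my_start = [0.5, 0.5, 1.0, 1.0]   # one entry per free parameter (illustrative)
fit(model; start_val = my_start)
```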

docs/src/tutorials/collection/multigroup.md

Lines changed: 1 addition & 1 deletion
@@ -81,7 +81,7 @@ model_ml_multigroup = SemEnsemble(
 We now fit the model and inspect the parameter estimates:
 
 ```@example mg; ansicolor = true
-fit = sem_fit(model_ml_multigroup)
+fit = fit(model_ml_multigroup)
 update_estimate!(partable, fit)
 details(partable)
 ```
