`docs/src/developer/imply.md` (2 additions, 2 deletions)
@@ -30,10 +30,10 @@ To make stored computations available to loss functions, simply write a function
Additionally, you can specify methods for `gradient` and `hessian` as well as the combinations described in [Custom loss functions](@ref).

-The last thing needed to make it work is a method for `n_par` that takes your imply type and returns the number of parameters of the model:
+The last thing needed to make it work is a method for `nparams` that takes your imply type and returns the number of parameters of the model:

```julia
-n_par(imply::MyImply) = ...
+nparams(imply::MyImply) = ...
```
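For illustration, here is a minimal sketch of what such a method could look like. The `MyImply` type and its `params` field are hypothetical and only stand in for whatever your imply type actually stores:

```julia
# Hypothetical imply type for illustration only; a real imply type
# would typically store model matrices, a parameter table, etc.
struct MyImply
    params::Vector{Symbol}  # assumed field holding the parameter labels
end

# nparams simply reports the number of model parameters.
nparams(imply::MyImply) = length(imply.params)

imply = MyImply([:λ₁, :λ₂, :θ₁])
nparams(imply)  # 3
```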
Just as described in [Custom loss functions](@ref), you may define a constructor. Typically, this will depend on the `specification = ...` argument that can be a `ParameterTable` or a `RAMMatrices` object.
-If you want to differentiate your own loss functions via automatic differentiation, check out the [AutoDiffSEM](https://github.com/StructuralEquationModels/AutoDiffSEM) package (spoiler alert: it's really easy).
+If you want to differentiate your own loss functions via automatic differentiation, check out the [AutoDiffSEM](https://github.com/StructuralEquationModels/AutoDiffSEM) package.
`docs/src/developer/sem.md` (2 additions, 1 deletion)
@@ -11,7 +11,8 @@ struct SemFiniteDiff{
observed::O
imply::I
loss::L
-optimizer::Dend
+optimizer::D
+end
```
Additionally, we need to define a method that computes at least the objective value; if we want to use gradient-based optimizers (which we most probably will), we also need to define a method that computes the gradient. For example, the respective fallback methods for all `AbstractSemSingle` models are defined as
@@ -64,17 +64,19 @@ Let's introduce some constraints:
(Of course, these constraints only serve an illustrative purpose.)
-We first need to get the indices of the respective parameters that are involved in the constraints. We can look up their labels in the output above, and retrieve their indices as
+We first need to get the indices of the respective parameters that are involved in the constraints.
+We can look up their labels in the output above, and retrieve their indices as
-The bound constraint is easy to specify: just give a vector of upper or lower bounds that contains the bound for each parameter. In our example, only parameter number 11 has an upper bound, and the total number of parameters is `n_par(model) = 31`, so we define
+The bound constraint is easy to specify: just give a vector of upper or lower bounds that contains the bound for each parameter. In our example, only the parameter labeled `:λₗ` has an upper bound, and the total number of parameters is `n_par(model) = 31`, so we define
```@example constraints
upper_bounds = fill(Inf, 31)
-upper_bounds[11] = 0.5
+upper_bounds[parind[:λₗ]] = 0.5
```
The equality and inequality constraints have to be reformulated to be of the form `x = 0` or `x ≤ 0`:
@@ -84,6 +86,8 @@ The equailty and inequality constraints have to be reformulated to be of the for
Now they can be defined as functions of the parameter vector:
```@example constraints
+parind[:y3y7] # 29
+parind[:y8y4] # 30
# θ[29] + θ[30] - 1 = 0.0
function eq_constraint(θ, gradient)
    if length(gradient) > 0
@@ -94,6 +98,8 @@ function eq_constraint(θ, gradient)
    return θ[29] + θ[30] - 1
end
+parind[:λ₂] # 3
+parind[:λ₃] # 4
# θ[3] - θ[4] - 0.1 ≤ 0
function ineq_constraint(θ, gradient)
    if length(gradient) > 0
@@ -109,7 +115,7 @@ If the algorithm needs gradients at an iteration, it will pass the vector `gradi
With `if length(gradient) > 0` we check whether the algorithm needs gradients, and if it does, we fill the `gradient` vector with the gradients of the constraint w.r.t. the parameters.
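Filling in the gradient lines that the excerpt above elides, complete versions of the two constraint functions could look as follows. This is only a sketch: for the equality constraint the gradient is 1 for the two involved parameters, and for the inequality constraint it is +1 and −1 for `θ[3]` and `θ[4]`, with all other entries 0:

```julia
# θ[29] + θ[30] - 1 = 0.0
function eq_constraint(θ, gradient)
    if length(gradient) > 0
        gradient .= 0.0      # zero out, then set the two nonzero entries
        gradient[29] = 1.0
        gradient[30] = 1.0
    end
    return θ[29] + θ[30] - 1
end

# θ[3] - θ[4] - 0.1 ≤ 0
function ineq_constraint(θ, gradient)
    if length(gradient) > 0
        gradient .= 0.0
        gradient[3] = 1.0
        gradient[4] = -1.0
    end
    return θ[3] - θ[4] - 0.1
end
```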
-In NLopt, vector-valued constraints are also possible, but we refer to the documentation for that.
+In NLopt, vector-valued constraints are also possible, but we refer to the documentation for that.
### Fit the model
@@ -153,10 +159,11 @@ As you can see, the optimizer converged (`:XTOL_REACHED`) and investigating the