A while back, I was messing about with a simple pharmacological model called the Hill function in order to figure out the syntax for nonlinear modelling using several different tools in R. The exercise was helpful for me, and I've gone back to it several times to grab the syntax for new applications, so I thought I'd write it up for others (and also so I don't have to go and find it each time on my own computer).

```r
# remotes::install_github("mathesong/kinfitr")
library(tidyverse)
library(kinfitr)
library(brms)

# -- Attaching packages --------------------------- tidyverse 1.3.0 --
# x dplyr::filter() masks stats::filter()
# The following object is masked from 'package:dplyr':
# Loading 'brms' package (version 2.12.0).
# A more detailed introduction to the package is available through vignette('brms_overview').
```

The data itself comes from a dataset of PBR28 PET data. The data can be found in the kinfitr package using the following:

```r
data(pbr28)
```

Looking in the Metabolite section of each individual's JSON data:

```r
# "sampleStartTime"  "sampleDuration"  "parentFraction"
```

Note that the durations are set to zero, because they were never recorded.

So, now let's put everything together a little bit more neatly.

```r
pfdata <- pbr28 %>%
  mutate(pf = map(jsondata, extract_metabolite)) %>%
  select(PET, Subjname, PETNo, Genotype, pf)
```

## Fitting using nonlinear least squares (NLS) with the nls.multstart package

First, I'll go through how to fit these curves one by one using NLS. This is the most common way that this is currently performed in the field. NLS makes use of gradient descent to find the set of parameters which maximise the likelihood of the data under them. We can analogise this by considering the algorithm hopping around parameter space to find the lowest point, i.e. the point with the lowest 'cost': the residual sum of squares. We could do this by exploring a grid of potential parameter values, but this is extremely inefficient, and would only ever be as fine-grained as our grid. Instead, the direction and size of the jumps are determined by the gradients of the cost function below the algorithm's feet: if it feels that it's on a downhill, it jumps in the direction that takes it down fastest, and it stops jumping when it feels that it is flat underfoot and that all directions go upwards. So gradient descent is more efficient than a grid of locations.

However, gradient descent can land in so-called local minima. These are positions that are the minimum in their local neighbourhood, but not the lowest point in the whole of parameter space. To get around this problem, and to avoid fitting failures, we can fit the model multiple times, starting in different locations, and choose the one with the best fit: that's what the nls.multstart package does. It uses the minpack.lm package under the hood, which makes use of the Levenberg–Marquardt algorithm, and fits with a set of different starting parameters.

First let's take a look at how the data look:

```r
pfdat
```

The surviving parts of the fit summary:

```r
#                 Pr(>|t|)
# Residual standard error: 0.02753 on 6 degrees of freedom
# Number of iterations to convergence: 40
# Achieved convergence tolerance: 1.49e-08
```

And let's plot the fitted curve against the data:

```r
predframe <- tibble(Time = seq(min(data$Time), max(data$Time), length.out = 100)) %>%
  mutate(ParentFrac = predict(nls_object, newdata = list(Time = .$Time)))

ggplot(data, aes(x = Time, y = parentFraction)) +
  geom_point() +
  geom_line(data = predframe, aes(x = Time, y = ParentFrac))
```

Looks pretty ok! Because the c estimate is so high, we might have some issues with finding it, and with defining a prior for it. I'll just define it on the log scale instead.

Another way that we could do things would be to use minpack.lm. However, here we have to define our starting parameters more carefully. If I do as follows, with some thumb-suck starting parameters:

```r
hill_nls_fit2 <- minpack.lm::nlsLM(parentFraction ~ hillfunc(Time, a, b, c),
                                   ...)
```

I get the error:

```
Error in nlsModel(formula, mf, start, wts) :
  singular gradient matrix at initial parameter estimates
```

This usually either means that our model doesn't work right, or that our starting parameters are bad. So let's try with some better parameters:

```r
hill_nls_fit2 <- minpack.lm::nlsLM(parentFraction ~ hillfunc(Time, a, b, c),
                                   ...)

# Number of iterations to convergence: 31
```

That works! And how do the values compare to our multstart values?

```r
coef(hill_nls_fit)  # multstart
```

So, now let's use purrr to fit all the curves.

```r
pfdata <- pfdata %>%
  mutate(hill_nls_fit = map(pf, ~hill_nls_fit_func(.x)))
```

Let's check a couple of fits to make sure things went ok.

```r
plot_nls(pfdata$hill_nls_fit[[1]], pfdata$pf[[1]])
plot_nls(pfdata$hill_nls_fit[[2]], pfdata$pf[[2]])
```

And let's check the distribution of the outcomes.

```r
hill_nls_outcomes <- pfdata %>%
  mutate(outpars = map(hill_nls_fit, ~broom::tidy(.x))) %>%
  unnest(outpars)

ggplot(hill_nls_outcomes, aes(x = estimate, colour = term, fill = term)) +
  geom_density(alpha = 0.5, fill = colourcodes, colour = colourcodes)
```
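The definition of `hillfunc` itself did not survive extraction. A common parameterisation for parent-fraction curves, which starts at 1 and decays towards an asymptote `a`, is sketched below; the exact form used in the original post may differ, so treat this as an assumption.

```r
# Hypothetical Hill-type parent fraction model (assumed form, not the post's):
# pf(0) = 1, pf(Inf) = a; b controls steepness, c the midpoint scale.
hillfunc <- function(Time, a, b, c) {
  1 - ((1 - a) * Time^b) / (c + Time^b)
}

hillfunc(0,   a = 0.2, b = 2, c = 500)  # -> 1 at Time = 0
hillfunc(1e6, a = 0.2, b = 2, c = 500)  # -> approaches a
```

A high `c` estimate in this parameterisation is exactly the kind of parameter one might prefer to fit on the log scale, as the post suggests.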
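The `nls_multstart()` call itself was lost in extraction. Below is a self-contained sketch of what such a call looks like, using simulated data, an assumed Hill parameterisation, and made-up start bounds and iteration count (none of these specifics are from the post).

```r
library(nls.multstart)

# Assumed Hill-type model (hypothetical parameterisation)
hillfunc <- function(Time, a, b, c) 1 - ((1 - a) * Time^b) / (c + Time^b)

# Simulate one parent-fraction curve
set.seed(42)
dat <- data.frame(Time = seq(2, 120, by = 10))
dat$parentFraction <- hillfunc(dat$Time, a = 0.2, b = 2, c = 500) +
  rnorm(nrow(dat), sd = 0.01)

# Fit from 500 random start points drawn between start_lower and start_upper,
# keeping the best (lowest-AIC) converged fit
hill_nls_fit <- nls_multstart(
  parentFraction ~ hillfunc(Time, a, b, c),
  data        = dat,
  iter        = 500,
  start_lower = c(a = 0, b = 0.5, c = 1),
  start_upper = c(a = 1, b = 5,   c = 5000),
  lower       = c(a = 0, b = 0,   c = 0)
)
coef(hill_nls_fit)
```

The returned object is an ordinary `nls` fit, so `coef()`, `predict()`, and `broom::tidy()` all work on it as in the rest of the post.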
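The local-minima argument can be demonstrated in a few lines of base R with a toy one-dimensional cost function of my own (not from the post): a single gradient-based optimisation started in the wrong basin stalls in a local minimum, while a multi-start scheme finds the global one.

```r
# Toy cost with a local minimum near x = 3 and the global minimum near x = -3.3
cost <- function(x) 0.05 * x^4 - x^2 + 0.5 * x

# A single fit started in the wrong basin converges to the local minimum
single <- optim(par = 3.5, fn = cost, method = "BFGS")

# Multi-start: run from several starting points and keep the best result
starts <- seq(-5, 5, length.out = 11)
fits   <- lapply(starts, function(s) optim(par = s, fn = cost, method = "BFGS"))
best   <- fits[[which.min(vapply(fits, `[[`, numeric(1), "value"))]]

c(single = single$par, best = best$par)  # single lands near 3, best near -3.3
```

This is the same trade-off nls.multstart makes: each individual fit is cheap gradient-based optimisation, and robustness comes from the spread of starting locations.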
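The `start` lists were lost from both `nlsLM()` calls. A runnable sketch with simulated data and made-up starting values (the specific numbers and the model form are mine, not the post's): order-of-magnitude-sensible starts typically converge, while wildly wrong ones trigger the singular-gradient error.

```r
library(minpack.lm)

# Assumed Hill-type model (hypothetical parameterisation)
hillfunc <- function(Time, a, b, c) 1 - ((1 - a) * Time^b) / (c + Time^b)

dat <- data.frame(Time = seq(2, 120, by = 10))
dat$parentFraction <- hillfunc(dat$Time, a = 0.2, b = 2, c = 500)

# Starting values in a plausible range usually converge without complaint;
# a bad "thumb-suck" start can instead raise:
#   Error in nlsModel(...): singular gradient matrix at initial parameter estimates
hill_nls_fit2 <- nlsLM(parentFraction ~ hillfunc(Time, a, b, c),
                       data  = dat,
                       start = list(a = 0.5, b = 1, c = 100))
coef(hill_nls_fit2)
```

Unlike `nls_multstart()`, `nlsLM()` runs from exactly one starting point, which is why the post says the starts have to be chosen more carefully here.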