table of contents
- introduction
- getting started
- exercise 1: start and monitor basic nucleotide runs
- exercise 2: inspect and compare the final results of a run
- exercise 3: constrain a search
- exercise 4: bootstrap analyses
- exercise 5: bootstrapping and ascertainment bias
- exercise 6: use a partitioned model
- In order to complete these exercises you will need to have a tree viewer and a text editor installed, and be able to access the cluster and move files to and from it.
introduction
This section explains the data set and the commands; you don't need to run them here!
Example data set
We will be using a small 29-taxon by 1218-nucleotide mammal data set that was taken from a much larger data set (44 taxa, 17 kb over 20 genes) presented by Murphy et al. (2001). The genes included here are RAG1 and RAG2 (Recombination Activating Genes). This is a difficult data set because the internal branches are quite short and the terminals quite long.
This is a hard phylogenetic problem, and phylogenetic estimates are less repeatable on this data set than on many others of similar size, so consider this a "worst case" data set. There are definitely local topological optima. The trees inferred using this gene also do not match our understanding of the relationships of these taxa in many places, but that is not really important for our purposes here.
We will run several analyses using two different software tools for estimating maximum likelihood phylogenies: Garli and RAxML.
Garli
GARLI reads all of its settings from a configuration file. By default it looks for a file named garli.conf, but other configuration files can be specified as a command line argument after the executable (e.g., if the executable were named garli, you would type "garli myconfig.conf"). Note that most of the settings typically do not need to be changed from their default values, at least for exploratory runs. We will only experiment with some of the interesting settings in this demo, but detailed discussions of all settings can be found on the support web site.
The config file is divided into two parts: [general] and [master]. The [general] section specifies settings such as the data set to be read, starting conditions, model settings, and output files. To start running a new data set, the only setting that must be specified is the datafname, which specifies the file containing the data matrix in Nexus, PHYLIP, or FASTA format. The [master] section specifies settings that control the functioning of the genetic algorithm itself, and typically the default settings can be used. Basic template configuration files are included with any download of the program.
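The snippet below is a rough sketch of what a config file can look like. The setting names are real GARLI settings, but the values are only illustrative (mydata.nex and run1 are placeholders), and the real template files contain many more settings than shown here:
[general]
datafname = mydata.nex
ofprefix = run1
searchreps = 2
outgroup = 1
datatype = dna
ratematrix = 6rate
statefrequencies = empirical
ratehetmodel = gamma
numratecats = 4
invariantsites = estimate

[master]
nindivs = 4
stopgen = 5000000
stoptime = 5000000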
Run garli by calling the software and telling it the name of the configuration file to use.
To run:
garli ‹garli config file›
RAxML
RAxML reads its configuration information from command line flags.
We will walk through a few commonly used arguments in this lab, but there are many potential options, which are described in the
RAxML Manual.
The command line flags required for every RAxML run are:
raxmlHPC -m ‹model of evolution› -p ‹random number seed› -s ‹alignment file (in fasta or phylip format)› -n ‹output_name›
These flags can be used in any order, but it makes things much less confusing if you keep them in the same order when you run different analyses.
I like to keep the model of evolution first, and the data file and output file name last. You can type in a number after -p for the random number seed, or use $RANDOM, which will choose a random number for you. Using a seed allows you to re-run analyses and get the same results. But if you want to compare different results, be sure to change the random seed!
NOTE: RAxML will not automatically overwrite files from previous runs with the same name. Sometimes this is convenient, sometimes it is annoying.
If you get the error message "RAxML output files with the run ID ‹output_name› already exist", either delete those files or use a new name for your output.
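For example, you can let the shell pick the seed for you and clear out the output of a stale run before repeating it (the run name test01 below is just a placeholder):
# use the shell's $RANDOM variable as the random number seed
raxmlHPC -m GTRGAMMAI -p $RANDOM -s datafiles/murphy29.rag1rag2.fasta -n test01
# delete the output files of an earlier run named test01 before reusing that name
rm RAxML_*test01*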
We will set up several analyses in both RAxML and Garli for comparison and to explore tree searching, likelihood calculations, models of evolution and bootstrapping.
getting started
- Download the package of files used in this activity: MLSearchLab.zip
- Log in to the cluster and type "wget https://mctavishlab.github.io/assets/MLSearchLab.zip"
- Uncompress the zip file using "unzip MLSearchLab.zip"
- You will need to make edits to the configuration file and command line arguments in most cases.
exercise 1: start and monitor basic nucleotide runs
- Change to the MLSearchLab directory
- Open the garli_ex1.conf file in a text editor (either using nano on the cluster or editing via Cyberduck).
Garli
There is not much that needs to be changed in the config file to start a preliminary run. In this example config file, a number of changes from the defaults have been made so that the example is more instructive and a bit faster (therefore, do NOT use the settings in this file as defaults for any serious analyses). You will still need to change a few things yourself. Note that the configuration file specifies that the program perform two independent search replicates (searchreps = 2). Also note that taxon 1 (Opossum) is set as the outgroup (outgroup = 1).
Make the necessary changes:
- Set datafname = datafiles/murphy29.rag1rag2.nex
- Set ofprefix = garli_run1. This will tell the program to begin the name of all output files with "garli_run1...".
- We are going to use a GTR+I+G model of sequence evolution. This is the default model set in the configuration file, so you don't need to change anything, but look at the section that sets up the evolutionary model:
datatype = dna
ratematrix = 6rate
statefrequencies = empirical
ratehetmodel = gamma
numratecats = 4
invariantsites = estimate
We are using empirical base frequencies in order to easily compare the Garli and RAxML estimates.
- Save the file.
- Run Garli using that configuration file: garli garli_ex1.conf
You will see a bunch of text scroll by, informing you about the data set and the run that is starting. Most of this information is not very important, but if the program stops be sure to read it for any error messages. The output will also contain information about the progress of the initial optimization phase, and as the program runs it will continue to log information to the screen. This output contains the current generation number, current best lnL score, optimization precision and the generation at which the last change in topology occurred. All of the screen output is also written to a log file that in this case will be named garli_run1.screen.log, so you can come back and look at it later.
(These are not things that you would normally need to do with your own analyses)
We will do this just in Garli, because it is more straightforward than monitoring a run in RAxML.
- Look in the MLSearchLab directory, and note the files that have been created by the run.
- Open garli_run1.log00.log in a text editor. (Either on the cluster or via Cyberduck.)
This file logs the current best lnL, runtime, and optimization precision over the course of the run. It is useful for plotting the lnL over time. Next, we will look at the file that logs all of the information that is output to the screen.
- Open garli_run1.screen.log in a text editor.
This file contains an exact copy of the screen output of the program. It can be useful when you go back later and want to know what you did. In particular, check the "Model Report" near the start to ensure that the program is using the correct model.
Now let's look at the current best topology. This is contained in a file called garli_run1.best.current.tre. This file is updated every saveevery generations, so it is always easy to see the current best tree during a search. (Do not use this as a stopping criterion and kill the run when you like the tree though!)
- Open garli_run1.best.current.tre in Figtree or another tree viewer and examine the tree. (You may be able to double-click the file and associate .tre files with Figtree.)
RAxML
Now let's run the same analysis in RAxML.
Instead of using a config file, you will provide the same (or similar) information via command line flags. We will use the required flags and add
'-# 2', to tell RAxML to run 2 search replicates, like we did in Garli.
We are using the same data set as we used for Garli, but in FASTA format rather than Nexus. (Take a look at the files in MLSearchLab/datafiles to familiarize yourself.)
To set the evolutionary model to GTR+I+G, as we did in the Garli run, we will use '-m GTRGAMMAI'.
RAxML also implements a faster alternative, GTRCAT, which uses a different approximation to capture rate heterogeneity across sites and can yield trees with slightly better likelihood values (see Stamatakis 2006). However, it is not a good idea to use the CAT approximation of rate heterogeneity on data sets with fewer than 50 taxa: in general there will not be enough data per alignment column to reliably estimate the per-site rate parameters. We only have 29 taxa here, so we will stick with GTRGAMMA.
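For reference, if you did want to use the CAT approximation on a larger data set, only the model flag would change (the alignment name below is just a placeholder):
raxmlHPC -m GTRCAT -p 12345 -s some_large_alignment.phy -n cat_run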
We will compare the likelihoods of our trees from our RAxML and Garli searches in Exercise 2.
- raxmlHPC -m GTRGAMMAI -p ‹choose a random number seed› -# 2 -s datafiles/murphy29.rag1rag2.fasta -n rax01
exercise 2: inspect and compare the final results of a run
Garli
Here are a few things that you wouldn't normally need to look at, but that might help understand how Garli tree search works:
We can examine how the topology and branch lengths changed over the entire run. The garli_run1.rep1.treelog00.tre file contains each successively better scoring topology encountered during the first search replicate. Note that this file can be interesting to look at, but in general will not be very useful to you. The default is for this file to not be created at all.
- Open garli_run1.rep1.treelog00.tre in Figtree. (This will require transferring the file to your computer)
- Click through all of the trees (using the arrows on the upper right).
Note how the tree slowly changes over the run. We can also get other information from the treelog file.
- Open the garli_run1.rep1.treelog00.tre file in a text editor.
You will see a normal Nexus trees block. Each tree line includes comments in square brackets containing the lnL of that individual, the type of topological mutation that created it (mut = 1 is NNI, 2 and 4 are SPR, 8 and 16 are local SPR) and the model parameters of that individual. (The "M1" in the model string indicates that this is the first model.) For example:
tree gen1= [&U] [-10286.10914 mut=8][ M1 r 1 4 1 1 4 e 0.25 0.25 0.25 0.25 a 0.922 p 0.394 ]
If you scroll through and look at the mutation types, you will probably notice that a mix of all three topological mutation types was creating better trees early on, but the very local NNI mutations dominate at the end of the run. The model parameters that were associated with each tree during the run appear within a comment. They are specified with a simple code, and this model string is in exactly the same format that you would use to provide GARLI starting model parameter values. The parameter codes are:
r = relative rate matrix
e = equilibrium base frequencies
a = alpha shape parameter of the gamma rate heterogeneity distribution
p = proportion of invariable sites
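In the example tree line shown above, 'r 1 4 1 1 4' gives the five free relative rates of the GTR matrix (A-C, A-G, A-T, C-G, and C-T, with the G-T rate fixed at 1), 'e 0.25 0.25 0.25 0.25' gives the equilibrium base frequencies, 'a 0.922' the gamma shape parameter, and 'p 0.394' the proportion of invariable sites.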
The config files used here are set up to use a feature of the program that collapses internal branches that have an MLE length of zero. This may result in final trees that have polytomies. This is generally the behavior that one would want. Note that the likelihoods of the trees will be identical whether or not the branches are collapsed. However, this will affect Robinson Foulds distances!
Things that you should examine with your own analyses (Garli and RAxML):
The information that you really want from the program are the best trees found in each search replicate and the globally best tree across all replicates. In Garli, after each individual replicate finishes, the best trees from all of the replicates completed thus far are written to the .best.all.tre file. When all replicates have finished, the best tree across all replicates is written to the .best.tre file. In RAxML, the best trees from each replicate are saved to RAxML_result.rax01.RUN.0 and RAxML_result.rax01.RUN.1, and the best tree is saved to the RAxML_bestTree.rax01 file.
- Let's examine our results more closely.
- First, take a look at the end of the Garli .screen.log file. You will see a report of the scores of the final tree from each search replicate, an indication of whether they are the same topology, and a table comparing the parameter values estimated on each final tree.
For RAxML, examine the likelihoods for your two different tree searches at the end of your RAxML_info.rax01 file.
Within each of your analyses, there are two possibilities:
- The search replicates found the same best tree. You should see essentially identical lnL values and parameter estimates. The screen.log file should indicate that the trees are identical.
- The search replicates found two different trees. This is almost certainly because one or both were trapped in local topological optima. You will notice that the lnL values are somewhat different and the parameter estimates will be similar but not identical. The search settings may influence whether searches are entrapped in local optima, but generally the default settings are appropriate.
- Did your two Garli replicates find the same tree?
- Did your two RAxML replicates find the same tree?
- Transfer the best tree from each analysis to your computer, and open them using Figtree. (RAxML_bestTree.rax01 and garli_run1.best.tre)
- Are the topologies the same? Are the branch lengths?
It isn't easy (or reasonable) to check by eye! And because of different choices made in the likelihood computation, the likelihoods are not directly comparable.
- On the cluster, use the 'treecompare.py' script to compare the two ML tree estimates. This script relies on DendroPy (Sukumaran and Holder) to perform the tree comparisons.
- What are the RF and weighted RF distances between your two best tree estimates?
python treecompare.py RAxML_bestTree.rax01 ‹treefile format› garli_run1.best.tre ‹treefile format›
(open the tree files in your text editor to figure out what the tree file formats are for these two trees.)
We can also more thoroughly evaluate and compare the results of our two different searches in PAUP*. Being able to open the results of one program in another for further analysis is a good skill to have.
- In the MLSearchLab directory, execute the datafiles/murphy29.rag1rag2.nex file in PAUP*. (At the command line, type "paup datafiles/murphy29.rag1rag2.nex".)
- Read your tree estimate from Garli into PAUP*. To read the file into PAUP*, type "gettrees file=garli_run1.best.tre".
- Read your tree estimate from RAxML into PAUP*. Using 'mode=7' will keep your tree from Garli in memory as well. To read the file into PAUP*, type "gettrees unrooted mode=7 file=RAxML_bestTree.rax01".
- Set your model of evolution to GTR+I+G in PAUP by running "lset nst=6 rmat=estimate basefreq=empirical shape=estimate rates=gamma pinv=est"
- In PAUP*, score your two trees by running "lscore".
This tells PAUP* to use the two trees you have imported but to optimize all the parameter values. After a bit you will see the optimized lnL score and parameter values for each tree as estimated by PAUP*.
- How do the scores for the two trees compare with each other? (these scores were calculated in the same way, and are therefore directly comparable)
- How do they compare to the likelihood estimates from Garli and RAxML?
- Come up to the projector, and write the likelihood score for each tree in the appropriate column.
Since we've already loaded the trees into PAUP*, we can also use PAUP* to compare the two trees and see if they differ (and by how much).
- In PAUP*, type "treedist".
This will show what is termed the "Symmetric Tree Distance" between the trees. It is a measure of how similar they are, and is the number of branches that appear in only one of the trees. If the trees are identical, the distance will be zero. The maximal distance between two fully resolved trees is 2*(# sequences - 3).
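For the 29 sequences in this data set, that maximum is 2 * (29 - 3) = 52.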
If the trees are different, we can calculate a consensus tree in PAUP* and see exactly where they agree. Note that in general you should choose the best scoring tree as your ML estimate instead of a consensus.
- In PAUP*, type "contree /strict=yes majrule=yes LE50=yes".
This will show a strict consensus and a majority rule consensus. The strict consensus completely collapses branches that conflict, but since GARLI already collapsed zero length branches it is hard to tell where the conflict is. The majority rule consensus will show 50% support for branches that were only found in one tree, but it is not possible to show them all in a single tree.
- To quit paup type "quit".
exercise 3: constrain a search
If you looked carefully at any of the trees you've inferred (and know something about mammals), you may have noticed that the relationships are somewhat strange (and definitely wrong) in places. One relationship that this small data set apparently resolves correctly is the sister relationship of Whale and Hippo. This relationship (often termed the "Whippo" hypothesis) was once controversial, but is now fairly well accepted. If we are particularly interested in this relationship we might want to know how strongly the data support it. One way of doing this would be simply by looking at the bootstrap support for it, but we might be interested in a more rigorous hypothesis test such as a simulation based parametric bootstrap test, a Shimodaira-Hasegawa (SH) test or an Approximately Unbiased (AU) test.
The first step in applying one of the topological hypothesis tests is to find the best topology that does NOT contain the Whippo relationship. This is done by using a constrained topology search. In this case we want a negative (also called converse) constraint that searches through tree space while avoiding any tree that places Whale and Hippo as sisters.
It is possible to set positive constraints in RAxML, but it is not possible to set negative constraints, so in this section we will only use GARLI. GARLI allows constraints to be specified in a number of ways. We will do it by providing a multifurcating constraint tree containing only the bipartition to be constrained.
- Use a text editor to open a new text file.
Constraints can either be positive (MUST be in the inferred tree) or converse (also called negative, CANNOT be in the inferred tree). The constraint type is specified to GARLI by putting a + or - at the start of the constraint string in the constraint file.
- On the first line, type a '-' for a negative constraint, followed by the string specifying the (Whale, Hippo) grouping:
-(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,22,23,24,25,26,27,28,29,(20,21))
Rather than having to count through the taxon lines in the input Nexus file, you can look at your garli_run1.best.tre output from the previous exercise to see that Whale and Hippo are taxa 20 and 21. Take a look at this constraint tree in Figtree.
- Save the file as whippoNegative.txt.
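(For comparison, a positive constraint forcing the (Whale, Hippo) grouping would be the same string with a leading '+' instead of '-'.)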
Now we need to tell GARLI to use the constraint. The garli_constrained.conf file has already been set up to be similar to the one we used during the unconstrained search earlier, so we only need to make minimal changes.
- Edit the garli_constrained.conf file and set constraintfile = whippoNegative.txt.
- Set ofprefix = constrainedRun1.
- Save the config file.
- Run GARLI: garli garli_constrained.conf
Constrained searches can make searching through treespace more difficult (you can think of it as walls being erected in treespace that make getting from tree X to tree Y more difficult), so you may see that the two constrained search replicates result in very different lnL scores. When the run finishes, note the difference in lnL between the best tree that you found earlier and the best constrained tree. This difference is a measure of how strongly the data support the presence of the (Whale, Hippo) group relative to its absence. Unfortunately we can't simply do a likelihood ratio test here to test for the significance of the difference because we have no expectation for how this test statistic should be distributed under the null hypothesis. That is what the parametric bootstrap, SH or AU test would tell us, but is well beyond the scope of this demo. (For an extensive discussion see Holder and McTavish 2016)
exercise 4: bootstrap analyses
One way to check if your phylogenetic estimates are resilient to sampling error is to use bootstrapping: re-sampling sites from your alignment and re-estimating your tree. This can be an important way to assess uncertainty due to data sampling. However, with large data sets and any form of systematic error, you can get 100% bootstrap support even for incorrect relationships. THEREFORE: low bootstrap support may be a signal of unreliable relationships, but the converse is not true. High bootstrap support indicates that the inferred relationships are not sensitive to your data sampling (very common with genomic data), not that they are correct!
- In RAxML we can use the flags '-# ‹number of bootstrap replicates›' and '-b ‹bootstrap random number seed›':
raxmlHPC -m GTRGAMMAI -p ‹choose a random number seed› -# 100 -b ‹bootstrap random number seed› -s datafiles/murphy29.rag1rag2.fasta -n rax_boot
However, this will take too long to run! Kill this run using Ctrl-C.
- A pre-baked bootstrap run is in the folder 'bootstrap_results'. We will use this one instead.
We can then apply those bootstrap proportions to our best scoring tree from our first analysis, by using the '-f' flag to select an algorithm. We will use '-f b' to draw bipartition information on a tree, which we provide with the '-t' flag:
raxmlHPC -m GTRGAMMAI -f b -t RAxML_bestTree.rax01 -z bootstrap_results/RAxML_bootstrap.rax_boot -n boot_bipart
- Look at the file RAxML_bipartitions.boot_bipart. You can open it in Figtree - it will ask you what the labels represent; they represent bootstrap proportions. In the view menu on the left in Figtree, set the node labels to display them. (Beware - some tree viewers can incorrectly display support labels on nodes, especially following re-rooting. See: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5435079/)
Other external programs can also be used to map bootstrap replicates onto bipartitions or to estimate consensus trees. Good options are PAUP* and a nice program called SumTrees (it requires a Python installation, and is available at http://pythonhosted.org/DendroPy/programs/sumtrees.html).
- Execute the datafiles/murphy29.rag1rag2.nex data in PAUP (type "paup datafiles/murphy29.rag1rag2.nex").
- In PAUP*, clear any existing trees in memory using "cleartrees"
- Load the bootstrap trees with the command "gettrees unrooted file=bootstrap_results/RAxML_bootstrap.rax_boot".
- In PAUP*, make the majority rule consensus tree with the command "contree /strict=no majrule=yes LE50=yes grpfreq=no".
This will display the majority-rule bootstrap consensus tree, including branches appearing in less than 50% of the trees (LE50). You will notice that some parts of the tree are very poorly supported, while others have high support. It is somewhat comforting that the parts of the tree that we know are resolved incorrectly receive low support. This is precisely why phylogenetic estimates MUST be evaluated in light of some measure of confidence, be it bootstrap values or posterior probabilities.
exercise 5: bootstrapping and ascertainment bias
Simulated data can be a great way to investigate model misspecification and biases. This exercise uses a data set simulated on a Felsenstein Zone tree. The data were simulated using Seq-Gen.
- Many data sets are enriched for variable sites, or consist exclusively of alignment columns with variable sites, i.e. single nucleotide polymorphism (SNP) data. This is an example of 'ascertainment bias': the data that you have 'ascertained' and included in your alignment are not a random subset of the genome. Such data exclude invariant sites, as well as sites at which repeated mutations have resulted in the same base at the tips (homoplasy).
- Let's take a simulated data set that has not been subject to any ascertainment bias, sim_noasc.phy, and estimate a tree:
raxmlHPC -m GTRGAMMA -p 2 -s datafiles/sim_noasc.phy -n no_asc_bias
Use the treecompare.py script to compare the best tree estimate to the true tree (the tree that the data were simulated on), datafiles/sim.tre. (An example call appears below.)
Did you get the correct tree?
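A possible invocation is sketched below; it assumes both files turn out to be plain newick trees (and that 'newick' is the format label the script expects), so open them in a text editor to check first:
python treecompare.py RAxML_bestTree.no_asc_bias newick datafiles/sim.tre newick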
- Now let's run that same analysis on only the variable sites from that alignment, sim_variablesites.phy. This is the exact same alignment, but with the invariant columns removed.
raxmlHPC -m GTRGAMMA -p 2 -s datafiles/sim_variablesites.phy -n asc_uncorrected
Did you get the correct tree?
How do the branch lengths differ from the true tree?
- Let's bootstrap it!
raxmlHPC -m GTRGAMMA -p 123 -# 100 -b 123 -s datafiles/sim_variablesites.phy -n asc_uncorr_boot
raxmlHPC -m GTRGAMMA -f b -t RAxML_bestTree.asc_uncorrected -z RAxML_bootstrap.asc_uncorr_boot -n asc_uncorr_bipart
What is the bootstrap support for the one bipartition in the tree? (You can open it in figtree, but with such a simple tree you can also just look at the text file directly)
Is that bipartition in the true tree?
- However, even if you only have the variable sites, it is possible to rescue your analysis by using an appropriate model of evolution.
In RAxML you can correct for ascertainment bias by using a model that accounts for it. We will use the ASC_GTRGAMMA model, with the Lewis (as in Paul Lewis) correction for the fact that sites that don't vary have been discarded from the alignment.
raxmlHPC -m ASC_GTRGAMMA --asc-corr lewis -p 2 -s datafiles/sim_variablesites.phy -n asc_corrected
Did you get the correct tree?
- Bootstrap it!
raxmlHPC -m ASC_GTRGAMMA --asc-corr lewis -p 2 -# 100 -b 123 -s datafiles/sim_variablesites.phy -n asc_corr_boot
raxmlHPC -m ASC_GTRGAMMA -f b -t RAxML_bestTree.asc_corrected -z RAxML_bootstrap.asc_corr_boot -n asc_corr_bipart
What is the bootstrap support for the one bipartition in the tree?
Is that bipartition in the true tree?
This is the exact same dataset! If our model of evolution is not appropriate for our data, our results can be systematically biased. Incorrect inferences can have 100% bootstrap support, because sampling across our data does not capture the problem.
NOTE: The ascertainment bias corrections in RAxML will not run if there are ANY invariant columns in your alignment.
exercise 6: use a partitioned model
Partitioned models are those that divide alignment columns into discrete subsets a priori, and then apply independent substitution submodels to each. There are a nearly infinite number of ways that an alignment could be partitioned and have submodels assigned, so not surprisingly configuration of these analyses is more complex.
Note that although some models such as gamma rate heterogeneity allow variation in some aspects of the substitution process across sites, a model in which sites are assigned to categories a priori is more statistically powerful IF the categories represent "real" groupings that show similar evolutionary tendencies.
Garli
Running a partitioned analysis requires several steps:
- Decide how you want to divide the data up. By gene and/or by codon position are common choices.
- Decide on specific substitution submodels that will be applied to each subset of the data.
- Specify the divisions of the data (subsets) using a charpartition command in a NEXUS Sets block in the same file as the alignment.
- Configure the proper substitution submodels for each data subset.
- Run GARLI.
Note that detailed instructions and examples are available on this page of the GARLI wiki:
Using partitioned models
On to the actual exercise...
- In the datafiles directory, open murphy29.rag1rag2.charpart.nex in a text editor. Scroll down to the bottom of the file, where a NEXUS Sets block with a bunch of comments appears. Notice how the charset commands are used to assign names to groups of alignment columns. Notice the charpartition command, which is what tells GARLI how to make the subsets that it will use in the analysis.
- Decide how you will divide up the data for your partitioned analysis. For this exercise it is up to you. There are a few sample charpartitions that appear in the datafile. If you want to use one of those, remove the bracket comments around it. If you are feeling bold, make up some other partitioning scheme and specify it with a charpartition. Save the file.
- Now we tell GARLI how to assign submodels to the subsets that you chose. Following is a table of the models chosen by the program Modeltest for each subset of the data. Look up the model for each of the subsets in the partitioning scheme that you chose. Don't worry if you don't know what they mean.
sites     rag1        rag2       rag1+rag2
all       GTR+I+G     K80+I+G    SYM+I+G
1st pos   GTR+G       SYM+G      GTR+I+G
2nd pos   K81uf+I+G   TrN+G      GTR+I+G
3rd pos   TVM+G       K81uf+G    TVM+G
1st+2nd   GTR+I+G     TrN+I+G    TVM+I+G
- Open the garli_partitioned.conf file. Everything besides the models should already be set up. Scroll down a bit until you see several sections headed like this: [model1], [model2]. This is where you will enter the model settings for each subset, in normal GARLI model format, in the same order as the subsets were specified in the charpartition. The headings [model1] etc. MUST appear before each model, and the numbering MUST begin with [model1]. For example, if you created 3 subsets, you'll need three models listed here. Open the garli_model_specs.txt file. This file will make it much easier to figure out the proper model configuration entries to put into the config file.
- In the garli_model_specs.txt file, find the models that appeared for your chosen subsets in the table above. For example, if you were looking to assign a model to rag2 2nd positions, the model from the table would be "TrN+G". Find the line that reads "#TrN+G" and copy the 6 lines below it. Now paste those into the garli_partitioned.conf file, right below a bracketed [model#] line with the proper model number. (A sketch of what one of these model blocks looks like appears at the end of this Garli section.)
- Start partitioned GARLI.
- Peruse the output in the .screen.log file, particularly looking at the parameter estimates and likelihood scores. Note the "Subset rate multiplier" parameters, which assign different mean rates to the various subsets. Note that the likelihood scores of the partitioning scheme that you chose could be compared to the likelihoods of other schemes with the AIC criterion. Details on how to do that appear on the partitioning page of the garli wiki:
Using partitioned models
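To make the model-entry step above concrete, here is a sketch of what one [model#] block for a subset assigned GTR+I+G would plausibly look like (it mirrors the model section from Exercise 1; the exact entries to copy are the ones in garli_model_specs.txt):
[model1]
datatype = dna
ratematrix = 6rate
statefrequencies = estimate
ratehetmodel = gamma
numratecats = 4
invariantsites = estimate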
RAxML
- We can set up the same partitioning schemes as described above for Garli, although RAxML only uses GTR as the model for DNA sequence evolution. Instead of incorporating the partitioning scheme into the data file, we pass in a text file that describes the partitions, using the command line argument -q.
- For example, this partition file splits up the two genes and names the partitions. The first 729 bases in the file are from rag1, and bases 730-1218 are from rag2. The 'DNA' at the start of each line tells RAxML what kind of data each partition contains:
DNA, rag1=1-729
DNA, rag2=730-1218
- If we partition the data set like this, the alpha shape parameter of the gamma model of rate heterogeneity, the empirical base frequencies, and the evolutionary rates in the GTR matrix will all be estimated independently for each of the two genes. All partitions are jointly optimized on a single topology with a shared set of branch lengths, although RAxML can optionally estimate a separate set of branch lengths for each partition (an example command appears below).
- If we want to partition by 1st, 2nd and 3rd codon position we can specify that in the partition file.
DNA, p1=1-1218\3
DNA, p2=2-1218\3
DNA, p3=3-1218\3
- Or we can partition by both codon position and gene.
DNA, s1=1-729\3
DNA, s2=2-729\3
DNA, s3=3-729\3
DNA, s4=730-1218\3
DNA, s5=731-1218\3
DNA, s6=732-1218\3
- Open a new text file, set up your preferred partition scheme, and save it as partition.txt.
- raxmlHPC -m GTRGAMMA -p ‹choose a random number seed› -q partition.txt -s datafiles/murphy29.rag1rag2.fasta -n rax_partition
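If you want RAxML to estimate a separate set of branch lengths for each partition (as mentioned above), the -M flag switches that on; for example (the output name here is arbitrary):
raxmlHPC -m GTRGAMMA -M -p ‹choose a random number seed› -q partition.txt -s datafiles/murphy29.rag1rag2.fasta -n rax_partition_perbranch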
NOTE: Partitioning allows for different rates across genes or sites, but this is still a concatenated analysis. Parameters of the evolutionary model are estimated separately, but all partitions are constrained to the same topology. Next week we will discuss gene tree-species tree approaches, which allow variation in gene tree topologies to inform the species tree.