Hi there! Can anyone help me with how to split a single .txt file containing multiple FASTA sequences into separate files, one per FASTA sequence? Thank you for your time and help!
Using awk:

    awk '/^>/ {if(x>0) close(outname); x++; outname=sprintf("_%d.fa",x); print > outname;next;} {if(x>0) print >> outname;}' *.fasta
biostars
{"uid": 126021, "view_count": 4040, "vote_count": 3}
Hi! I'm new to bioinformatics, but am working with some FASTQ files that have some strange base quality distributions, see image. It is strange, as we see only 4 unique Phred scores across the whole file, which seems surprising given that Illumina sequencing has 41 possible scores. This is happening across multiple files, and these files are straight from the sequencing company. The values correspond to the Phred scores of "F", ",", ":" and "#". I have confirmed this behaviour with multiple people in my team, so this is not an analysis problem; it is an issue with the files (also obvious when looking at raw read Phred scores). I also found [this][1] other question on Biostars; it's hard to tell, but they seem to show the same behaviour, suggesting perhaps it is a common issue. Does anyone have any idea what is happening? The reads themselves seem normal when compared to the reference genome. We have contacted the sequencing company and they haven't really provided clarity, so I thought that maybe people here could provide some insight. Thanks in advance!

![Aggregation of base qualities over fastq files for same sample][2]

[1]: https://forum.qiime2.org/t/fastq-quality-too-good-to-be-true/12674
[2]: /media/images/8e68fa10-5a2a-48e8-a38e-1872eb33
Reposting from @lievensterck's comment: "If I remember correctly, was Illumina not gonna change it's qual scores, in binned approach to reduce file size?" From that I found [this][1] answer, which seems to be it.

[1]: https://www.biostars.org/p/408032/
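For reference, the four characters reported in the question decode to Phred scores by plain ASCII arithmetic (standard Phred+33 encoding), consistent with a binned-quality scheme; a quick check in R:

    # Phred+33: score = ASCII code - 33
    quals <- c("F", ",", ":", "#")
    sapply(quals, utf8ToInt) - 33
    #  F  ,  :  #
    # 37 11 25  2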
biostars
{"uid": 9523759, "view_count": 617, "vote_count": 2}
I want to uninstall the *Bioconductor* [affycoretools package](https://www.bioconductor.org/packages/release/bioc/html/affycoretools.html) and reinstall it. Who can help me?
Remove a package with `remove.packages()`, e.g.

    remove.packages("affycoretools")

**Affycoretools** is a Bioconductor package, so reinstallation needs their install script / the **BiocInstaller** package, e.g.

    source("https://bioconductor.org/biocLite.R")
    biocLite("affycoretools")

---------

**Edit:** To install this package, start R (version "4.0") and enter:

    if (!requireNamespace("BiocManager", quietly = TRUE))
        install.packages("BiocManager")
    BiocManager::install("affycoretools")
biostars
{"uid": 295628, "view_count": 178054, "vote_count": 2}
Hi, I'm an undergraduate student. Please help me. I want a mapped BAM file from an SRA dataset from NCBI in order to analyze heterogeneity of mouse ESCs. The dataset was generated for the paper below (GSE60749): *Roshan M. Kumar et al. Deconstructing transcriptional heterogeneity in pluripotent stem cells. Nature (2014)*. Please tell me how to find the adapter sequence used in this experiment and what I should use for quality control (Prinseq? ShortRead?). For example, I want to try the SRA file of GEO Sample [GSM1486817][1]. Can anyone give me a quality control process for this SRA? Thank you for reading it through. Any help will be appreciated.

[1]: http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSM1486817
You can find the adapters used by reading a bit and googling around. "Nextera XT DNA Sample preparation reagents (Illumina)" were used to prepare the samples (as found [here](http://www.ncbi.nlm.nih.gov/sra?term=SRX686234) under "study summary", or [here](http://www.ncbi.nlm.nih.gov/sra/SRX686234/) under "library").

You can check for adapter contamination with [FastQC](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/).

**edit:** if the downstream analysis to be performed is mapping with BWA, Bowtie2 or any mapper which performs local alignment, adapters should not have a major impact. Besides, the reads pointed to in the question are 25+25, which should be shorter than the typical Nextera insert size, so adapter contamination should be really low.
biostars
{"uid": 143961, "view_count": 8318, "vote_count": 2}
I have been asked to recommend introductory books and resources for R and Bioconductor. My problem is just, I never read a book to learn R or Bioconductor, so I have no experience with this and cannot recommend one. I am interested mainly in introductory books, possibly targeting various groups of readers (computer scientists, molecular biologists, (bio-)statisticians); any recommendation appreciated.

For example, I used the following resources:

- The [R-manuals](http://cran.r-project.org/manuals.html), especially [the R intro](http://cran.r-project.org/doc/manuals/R-intro.html)
- There are also [a lot of contributed documents](http://cran.r-project.org/other-docs.html) on the R web site, but I didn't use them.
- If a package from Bioconductor interests me, I read the package vignette.
- I read the Bioconductor mailing list; that helps to see what other people use.
- I have the "[Venables, Ripley. S Programming](http://www.springer.com/statistics/computanional+statistics/book/978-0-387-98966-2)" book, which is hardly introductory.

Which books did you find helpful or completely useless for learning R/Bioconductor? For example: [R Programming for Bioinformatics](http://www.bioconductor.org/pub/RBioinf/) looks promising, anybody read it?

Or do you share my reluctance towards R books and prefer online resources?
http://manuals.bioinformatics.ucr.edu/home/R_BioCondManual#TOC-Vectors

http://www.bioinformatics.babraham.ac.uk/training.html#rintro
biostars
{"uid": 539, "view_count": 23484, "vote_count": 73}
Dear all, does STAR 2.6 `--quantMode GeneCounts` only quantify uniquely mapped reads? How can I make it quantify only the uniquely mapped reads? Thanks a lot.
Yes, `--quantMode GeneCounts` counts only uniquely mapped reads. You will still have to choose a column among the results according to your library's strandedness (ReadsPerGene.out.tab has a gene ID column followed by counts for the unstranded, forward-stranded and reverse-stranded cases); please refer to https://www.biostars.org/p/218995/
biostars
{"uid": 336962, "view_count": 5595, "vote_count": 1}
I have a dataframe in R that looks like this:

    V1          T1          T2          T3          T4         T5
    CXCL6       0.8536601   1.0903336   3.7633042   5.5800459  5.8477150
    PPBP        0.7739450   0.3587961   0.5073359   0.2743522  0.6221722
    CXCL10      0.1258370  -0.3535165  -0.7460387   3.5604672  0.1971432
    CXCL11     -0.2563139   0.7117200   0.0000000  -0.2288303  0.9955557
    CXCL12      0.6181279   1.7529310   1.7637760   1.2752787  1.2284810

I want to keep the rows that have values only between -1 and 1. I have tried this command but unfortunately it does not work:

    condition1 <- Genes[,c(2:6)] <=-1 & Genes[,c(2:6)] >=1
    Genes <- Genes[condition1,]

Can someone tell me where I am wrong so that I can successfully filter my dataframe?
Note that your original condition can never be TRUE: a value cannot be both <= -1 and >= 1; you want >= -1 and <= 1, combined so that *all* columns pass. There should be a more tidy way to do this, but here is my solution:

    rowIdx = as.numeric()
    for (i in 1:nrow(Genes)) {
        cond = Genes[-1][i,] >= -1 & Genes[-1][i,] <= 1
        if (all(cond == TRUE)) {
            rowIdx = c(rowIdx, i)
        }
    }

Then subset your dataframe by `rowIdx`:

    GenesSubset = Genes[rowIdx,]
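For what it's worth, a vectorized sketch of the same filter, without the explicit loop (assuming, as above, that column 1 holds the gene names and all remaining columns are numeric):

    # TRUE where every value in the numeric columns lies in [-1, 1]
    keep <- rowSums(Genes[-1] >= -1 & Genes[-1] <= 1) == ncol(Genes) - 1
    GenesSubset <- Genes[keep, ]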
biostars
{"uid": 9499241, "view_count": 1276, "vote_count": 1}
Dear all, I have this table. Does someone know how to de-aggregate it?

    plant    tissue  count
    tomato   leaf    1
    tomato   root    4
    tomato   shoot   5
    solanus  leaf    3
    solanus  root    2
    solanus  shoot   4

What I want is a dataframe like this:

             leaf  root  shoot
    tomato   1     4     5
    solanus  3     2     4

Thanks in advance for your tips.
Buenas tardes amiga/o,

You can use `dcast()`:

    df
        plant tissue count
    1  tomato   leaf     1
    2  tomato   root     4
    3  tomato  shoot     5
    4 solanus   leaf     3
    5 solanus   root     2
    6 solanus  shoot     4

    require(data.table)
    dcast(data = df, formula = plant ~ tissue, value.var = 'count')
        plant leaf root shoot
    1 solanus    3    2     4
    2  tomato    1    4     5

Kevin
biostars
{"uid": 344161, "view_count": 1395, "vote_count": 1}
Hi~ I'm currently using 10x cellranger to analyse single-cell RNA-seq data. According to their algorithm, reads mapping confidently to more than one exon will be discarded. However, there are paralogous genes in the genome that are largely identical, and all the reads for such genes are discarded. Therefore, I was wondering if there is a way to change the algorithm to count the first (or a random) confident alignment. Unfortunately, I wasn't able to locate the file containing the algorithm. Any hints would be appreciated. Thanks everyone!
The cellranger UMI deduplication algorithm does not handle reads that map among multiple genes; there is no "easy" way to handle this situation. You may be interested in taking a look at our quantification tool, [alevin](https://www.biorxiv.org/content/early/2018/10/24/335000), which we've designed, in part, to help deal with these cases. In addition to having a methodology for handling reads that map between multiple genes, it is much faster than cellranger.
biostars
{"uid": 356854, "view_count": 5075, "vote_count": 1}
I was experimenting with the Prokka and RAST annotation tools. I took a well-annotated swinepox virus genome from GenBank *(NCBI Reference Sequence: NC_003389.1)* and ran the sequences through Prokka and the RAST Seed server at the same time. I can see that only a few **(maybe around 1%)** of the genes were annotated; most of them were predicted as **hypothetical protein**, and the results were comparable between Prokka and RAST. I would assume that these tools look for similar sequences in NCBI and find the best-match protein, but it looks like that is not the case: they should be able to find that well-annotated swinepox virus genome in GenBank and predict most of the proteins. Also, if almost all the genes are predicted as hypothetical proteins, then there is not much difference between a gene prediction tool like GeneMark and a genome annotation tool. Are there any better annotation tools? Or is this what we get? Or have I misunderstood the concept of annotation? Please help me understand this.

*I have attached an image for comparison. The left one is the swinepox genome in .gb format and the right one is the same genome annotated with Prokka.*

Image link: https://github.com/lrjoshi/sample/blob/master/docs/annotation.PNG

![Image][1]

[1]: https://github.com/lrjoshi/sample/blob/master/docs/annotation.PNG
> Also, if almost all the genes are predicted as hypothetical protein then there is not much difference between gene prediction tool like Genemark and genome annotation tool.

Genome annotation is a two-step process: first you have to predict where the genes are, which tools like Augustus and GeneMark do; then you have to assign function to the predicted genes, usually by means of similarity searches against good quality databases. This is what Prokka and RAST are doing, but integrated in a pipeline. I don't know how RAST works, but Prokka uses several programs to predict genes (protein coding, non-coding RNA, tRNA, rRNA, and more) from the genome; then, after having these predictions, it tries to annotate them by searching the available databases.

Prokka uses Prodigal for protein-coding gene prediction, which I don't know whether is appropriate for virus gene prediction. But the most important reason your annotation came up mostly as hypothetical proteins is probably that you don't have an appropriate database installed to annotate viral genomes; you can pass one at run time with the `--proteins` option, which will take precedence over other installed databases.
biostars
{"uid": 328244, "view_count": 7044, "vote_count": 3}
I have genotype data in the form of a number of birdseed files (one for each sample), of the form:

```
Composite Element REF    Call    Confidence
SNP_A-2131660            2       0.0060
SNP_A-1967418            2       0.0281
SNP_A-1969580            2       0.0074
SNP_A-4263484            2       0.0104
SNP_A-1978185            0       0.0034
```

I've loaded these into a matrix in R to perform QC upon them and annotate them with the rs ids instead of the probe id, resulting in something like this:

```
> geno[1:4,1:4]
           TCGA.A7.A0D9 TCGA.A7.A0DB TCGA.A7.A13G TCGA.AC.A2FB
rs2887286             2            2            2            2
rs1496555             2            2            2            1
rs3890745             2            2            1            2
rs10489588            1            0            0            0
>
> dim(geno)
[1] 693085     94
```

I want to use 1000 Genomes data to impute genome-wide SNP data; however, I'm not sure how to get from this format to the format required by IMPUTE2. I've been reading through [this wiki][1] and it seems like it might be easiest to convert this to PED format and then use GTOOL to convert to IMPUTE format. Are there any existing tools that can do this reformatting directly? Or will I have to write a script to do the matrix > PED reformatting and then use GTOOL for PED > IMPUTE?

[1]: http://genome.sph.umich.edu/wiki/IMPUTE2:_1000_Genomes_Imputation_Cookbook
If you use SHAPEIT2 to pre-phase the chromosomes (best practice), it will accept bed files directly with the `--input-bed` flag, and it outputs files that can be read by IMPUTE2. The file type IMPUTE2 uses is called Oxford format, and with plink1.9 you can convert most file types to Oxford format. I've never heard of the birdseed format, but here is a script that can convert birdseed to bed: https://www.broadinstitute.org/mpg/birdsuite/howto.html
biostars
{"uid": 152660, "view_count": 3023, "vote_count": 1}
Hi, I have a burning question. I want to run a script named "predict_binding.py". Its syntax is:

    ./predict_binding.py [argA] [argB] [argC] ./file.txt

file.txt has a column of strings, all of the same length:

    string_1
    string_2
    string_3
    ...
    string_n

predict_binding.py works with the first 3 arguments and string_1, then the 3 arguments and string_2, and so on. That's fine, but now I have m values of argB, and I want to test all of them. I want to use the cluster for this, and this looks like a perfect job for parallel, isn't it? After reading the manual and spending hours trying to make it work, I realised I need some help.

What works so far (and is trivial) is:

    parallel --verbose ./predict_binding ::: argA ::: argBi ::: argC ::: ./file.txt

This gives the same result as:

    ./predict_binding.py argA argBi argC ./file.txt

And indeed the flag --verbose says that the command looks like `./predict_binding.py argA argBi argC ./file.txt`, but I want to test all argBs, so I made a file called args.txt, which looks like this:

    argA argB1 argC ./file.txt
    argA argB2 argC ./file.txt
    ...
    argA argBm argC ./file.txt

If I do:

    cat args.txt | parallel --verbose ./predict_binding.py {}

I get an error from ./predict_binding saying:

    predict_binding.py: error: incorrect number of arguments

And verbose says that the command looks like:

    ./predict_binding.py argA\ argBi\ argC\ ./file.txt

So maybe those backslashes are affecting the input of ./predict_binding? How could I avoid them? I have tried using double and single quotes " ', backslash \, backslash with single quote \'; none has worked! I also tried:

    cat ./args.txt | parallel --verbose echo | ./predict_binding

Same error as above. And I also tried to use a function like:

    binding_func() { ./predict_binding argA $1 argC ./file.txt; }

Interestingly, binding_func works for:

    parallel binding_func ::: argB1

But if I do:

    parallel binding_func ::: argB1 argB2

it gives the result for one arg but fails (same error as above) for the other. If I put only argB1 in the args.txt file and do:

    cat args.txt | parallel --verbose binding_func {}

it fails miserably with the same error:

    predict_binding.py: error: incorrect number of arguments

It seems a very trivial and easy problem but I haven't been able to solve it }:( I would appreciate very much any help provided. :)
Final Edit: Calling it like this will work:

    cat vals.txt | parallel --verbose "echo -e mhc_i/examples/input_sequence.fasta | mhc_i/src/predict_binding.py smm {} 9"

    parallel --verbose "echo -e mhc_i/examples/input_sequence.fasta | mhc_i/src/predict_binding.py {}" ::: smm ::: HLA-C*15:02 HLA-E*01:01 ::: 9

The script predict_binding.py also looks for sys.stdin (lines 301-303) and adds an empty argument to the list if you call it with parallel the way you tried. I'm not sure why this happens, but if you pipe the input fasta file to the script you can also run it with parallel.

---

Old Answer: I am not sure if I understand the question correctly, but for me this seems to work if the 2nd argument is a list of arguments:

    parallel --verbose ./predict_binding ::: argA ::: argB1 argB2 argB3 ::: argC ::: ./file.txt

    parallel --verbose ./predict_binding ::: argA ::: argB* ::: argC ::: ./file.txt   # e.g. if arguments are files

    parallel --verbose ./script.sh ::: argA ::: argB* ::: argC ::: file.txt
    ./script.sh argA argB argC file.txt
    ./script.sh argA argB1 argC file.txt
    ./script.sh argA argB2 argC file.txt
biostars
{"uid": 182136, "view_count": 18973, "vote_count": 1}
Hi guys, a question: I have a couple of SAM files that I got from aligning with Bowtie2. I am trying to sort through them to remove the usual stuff (multi-mapping etc.) after reading posts and guides. So far I've been successful in taking my SAM files, running

```
sed '/XS:/d' algn.sam > mappedonce.sam
```

and getting rid of all those with an XS tag. When I then run the following:

```
samtools view -c -f 4 mappedonce.sam
749598
samtools view -c -F 4 mappedonce.sam
12116420
```

I can see that I have 749598 unaligned reads, which I want to remove. So I ran this, as per posts on Biostars:

```
samtools view -F 4 -o mapped1_mapped.sam mappedonce.sam
```

However, when I then take this file after running the -F 4 filter, if I try to run anything else such as flagstat, or even `samtools view -c -F 4` again, I get the following error:

```
[E::sam_parse1] missing SAM header
[W::sam_read1] parse error at line 1
[main_samview] truncated file.
```

So what is samtools doing to my file? Is there a reason why samtools would remove the header and truncate my file? Am I doing something wrong? Consequently, if anyone has any non-samtools solutions to remove these 749598 reads, I'd be open to that!
You need the `-h` option in `samtools view` so the header is also printed out, e.g. `samtools view -h -F 4 -o mapped1_mapped.sam mappedonce.sam`. Without `-h`, the output SAM has no header lines, which is why downstream samtools commands complain about a missing header and a truncated file.
biostars
{"uid": 174739, "view_count": 11423, "vote_count": 2}
Hi, all. I want to output an MDS plot with ggplot2, but I couldn't read the data because the class of the data isn't supported by ggplot2. Is there a way to convert "dist" to "data.frame"? I tried "as.data.frame()", but it did not work; the following error message was output:

    > as.data.frame(d)
    Error in as.data.frame.default(d) : cannot coerce class ‘"dist"’ to a data.frame

I'm a beginner in bioinformatics, so sorry for the basic question. Could you please advise me? Thank you very much!
There is a specific function designed to do this in the `metagMisc` package:

    devtools::install_github("vmikk/metagMisc")
    library("metagMisc")
    dist2list(d, tri = TRUE)

There are also plenty of other solutions, as you may find [here][1].

[1]: https://stackoverflow.com/questions/23474729/convert-object-of-class-dist-into-data-frame-in-r
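Since the goal is an MDS plot, here is a base-R sketch that side-steps the conversion entirely: `cmdscale()` accepts the `dist` object directly and returns per-sample coordinates, which coerce to a data.frame without trouble:

    # classical MDS: turn the dist object into 2D coordinates
    mds <- as.data.frame(cmdscale(d, k = 2))
    colnames(mds) <- c("Dim1", "Dim2")

    library(ggplot2)
    ggplot(mds, aes(x = Dim1, y = Dim2)) + geom_point()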
biostars
{"uid": 9488743, "view_count": 3077, "vote_count": 2}
I am thankful for [Obi Griffith sharing his code on how to make heatmaps](http://www.biostars.org/p/18211/) in R, where you can add row and column information together with the color mapping information. I use it all the time to make pictures of hierarchically clustered expression profiles measured by expression arrays.

Usually I scale the input matrix on rows (probesets) to Z-scores in order to make them weigh equally in the clustering. The colors of the heatmap are mapped linearly to the Z-scores.

The problem is, when you have a few extremes in your data set, they tend to make your heatmap a little faint. In the attached heatmap you can see that the Z-scores range from somewhere around -10 to 10; however, about 99% are in the range of -4 to 4. I would like to change the way colors are mapped to the Z-scores in such a way that the green-to-red gradient goes from -4 to 4 and the colors of the Z-scores outside this range just do not become any more green or red than they already are. Any ideas?

    # import gplots for greenred colors
    library(gplots)
    # generate random matrix
    x <- replicate(100, rnorm(100, 0, 1))
    # draw heatmap
    heatmap.3(x, density.info="histogram", col=greenred)
    # introduce outliers
    x[1,1] <- -10
    x[2,2] <- 10
    # draw heatmap with outliers
    heatmap.3(x, density.info="histogram", col=greenred)

![Without extremes](http://s24.postimg.org/pakmr8sv9/heatmap_without_Extremes.png)
![With extremes](http://s10.postimg.org/6npznmu3d/heatmap_with_Extremes.png)
You can use the `breaks` argument to do this; from the `heatmap.2` documentation:

> breaks (optional) Either a numeric vector indicating the splitting points for binning x into colors, or a integer number of break points to be used, in which case the break points will be spaced equally between min(x) and max(x)

To use to achieve your desired effect:

    # breaks for the core of the distribution
    breaks=seq(-4, 4, by=0.2) # 41 values
    # now add outliers
    breaks=append(breaks, 10)
    breaks=append(breaks, -10, 0)
    # create colour panel with length(breaks)-1 colours
    mycol <- colorpanel(n=length(breaks)-1, low="green", mid="black", high="red")
    # now plot heatmap - I'm using heatmap.2, but am assuming this will work with Obi's heatmap.3
    heatmap.2(x, density.info="histogram", col=mycol, trace='none', breaks=breaks)

The major drawback is that the outliers are a lot less obvious, but clearly the heatmap overall is a lot less 'washed out'.

Results:

Before ![before](http://bioinf1.ncl.ac.uk/biostar/heatmap_before.png)

After ![after](http://bioinf1.ncl.ac.uk/biostar/heatmap_after.png)
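An alternative sketch (not from the thread above): clamp, i.e. winsorize, the Z-scores to [-4, 4] before plotting, so the default linear colour mapping covers only that range. Note this changes the plotted values themselves, not just the colour mapping:

    # clamp (winsorize) values outside [-4, 4] to the boundary
    x_clamped <- pmin(pmax(x, -4), 4)
    heatmap.2(x_clamped, density.info="histogram", col=greenred(40), trace="none")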
biostars
{"uid": 73644, "view_count": 25341, "vote_count": 2}
Hi BioStars, I am using pysam to count reads from a BAM file. I use pysam.AlignmentFile to read the BAM file and pysam.AlignmentFile.count to do the read counting, but it requires a .bai index file in the same path as the BAM file. However, I have no write permission on the BAM file's path. Is it possible to supply the .bai index file from another path, i.e. keep the BAM file and its .bai file in different paths? Thanks very much to anyone who can help with this!
Easiest is probably to create a symbolic link to the BAM (`ln -s`) in a directory where you can write, or in `/tmp/`, and write the index next to the link. You can do this from within Python using the `os` module.
biostars
{"uid": 228638, "view_count": 1677, "vote_count": 1}
Hello all, I have barcode count data corresponding to the viability of 25+ pooled bacterial strains under various conditions. The marginal distribution of untreated strain counts appears to be Negative Binomial. I'm trying to use DESeq2 to analyze these data, using a matrix of strains ("genes") as rows and conditions as columns. Since the variation of counts between most conditions for most strains is very large, but between replicates is relatively small, it seems sensible to estimate dispersions (in this case) on a gene- and condition-wise basis. The language in the DESeq2 vignettes and pre-print seems to suggest the dispersion estimates are "gene-wise". So if you run `DESeq()` followed by `plotDispEsts()`, does each point correspond to the variance estimate of a gene across conditions (in my case, strains), or to the variance estimate between replicates of a gene under one condition? I think the conceptual difference I'm talking about is the same as that between `blind=TRUE` and `blind=FALSE` in the `rlog()` and `varianceStabilizingTransformation()` functions. Finally, if DESeq2 does estimate dispersions on a solely gene-wise basis, would it be reasonable for me to estimate the dispersions of my data subsetted by each condition in turn, and then feed those results into my whole `DESeqDataSet` object using `dispersions()`? Many thanks for taking the time to read, and for any suggestions you might have. Eachan
The *estimateDispersions* function will estimate gene-wise dispersions using all columns, although a GLM fit (based on within-group variances and means) is done before estimating, so the differences between conditions are also incorporated. So I think you don't have to (and probably shouldn't) use condition-wise estimation. By the way, why do you want to use DESeq2 here? I mean, with a dataset of only 25 rows...
biostars
{"uid": 171611, "view_count": 4288, "vote_count": 2}
Are there any publicly available raw sequencing data for COVID-19, in BAM or FASTQ format?
Use `sra-explorer` to [search with][1] `COVID`; download using the instructions here: https://www.biostars.org/p/366721/

[NCBI's SARS-CoV-2 genomes page][2] has direct links to SRA accessions (scroll down the page).

[1]: https://sra-explorer.info/
[2]: https://www.ncbi.nlm.nih.gov/genbank/sars-cov-2-seqs/
biostars
{"uid": 425537, "view_count": 1420, "vote_count": 1}
Hi Everyone, I recently heard about the Genome in a Bottle project and found raw FASTQ data for the NA12878 sample in their GitHub repository. Now I'm trying to reproduce their analysis: https://github.com/genome-in-a-bottle/giab_data_indexes/blob/master/NA12878/sequence.index.NA12878_Illumina_HiSeq_Exome_Garvan_fastq_09252015 I would like to know if there is reference code available against which to compare my pipeline (not only the final result but the code). Thank you in advance, AY
Hi Yussab, as you did not reply, I do not know exactly what you are planning to do. It seems that you have your own pipeline, which means you have a final VCF generated from NA12878. I know these 2 tools to compare your result with the GIAB data set (I do not know if there is something else available):

1- [hap.py](https://github.com/Illumina/hap.py):

    hap.py truth.vcf query.vcf -f confident.bed -o output_prefix -r reference.fa

2- [RTG Tools](https://github.com/RealTimeGenomics/rtg-tools):

Generate an SDF file from your reference FASTA file:

    rtg format "Your_Fasta_File".fasta -o "File_Name_To_Output_Result"

Compare the two VCF files:

    rtg vcfeval -b truth.vcf -c query.vcf -o Directory_To_Output_The_Result -t PATH_TO_SDF_FILE -e GIAB_BED_FILE.bed --region=Your_Bed_File.bed

I highly recommend you read the tools' manuals for a better understanding.
biostars
{"uid": 463943, "view_count": 2647, "vote_count": 4}
Hello everybody. Can anyone help me understand the MDS plot for two groups of samples (e.g. control vs. treated) that we generate from edgeR or DESeq2? I understand that the figure separates the control from treated very well, but if we take the y-axis as the logFC, I don't get what the x-axis is. What is the x-axis in such a 2D figure? Thanks
Hello, the y-axis in a MDS plot does *not* represent logFC, and neither does the x-axis. Provided you have generated the plot according to the workflow [RNA-seq workflow: gene-level exploratory analysis and differential expression](https://www.bioconductor.org/help/workflows/rnaseqGene/#mds-plot) on Bioconductor, your x- and y-axes will be representative of **Euclidean distances** between your samples. These Euclidean distances will be produced/repeated across multiple dimensions in different ways, but the standard way of representing MDS is to just plot the Euclidean distances from the first 2 dimensions, with the x-axis being *Dimension 1* and the y-axis being *Dimension 2*. The dimensions are ordered based on how well they fit your samples (for further reading, take a look at the [cmdscale](https://stat.ethz.ch/R-manual/R-devel/library/stats/html/cmdscale.html) *Details* section). MDS is commonly used in genetic studies to find relationships between samples based on genotype, but, as you can see, it's also used in RNA-seq. For information on a related method, i.e., principal components analysis, please see my threads here:

- https://www.biostars.org/p/280615/#280634
- https://www.biostars.org/p/282685/#282691
- https://www.biostars.org/p/271694/

-------------------

For what it's worth:

- A volcano plot represents logFC (x-axis) and negative log10 P- or adjusted P-value (y-axis)
- A MA plot represents average expression (x-axis) and logFC (y-axis)
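To make the construction concrete, here is a minimal base-R sketch of classical MDS on a hypothetical log-expression matrix `mat` (samples in columns). Note that edgeR's `plotMDS` is similar in spirit but uses 'leading fold change' distances computed from the top genes rather than plain Euclidean distances over everything:

    # mat: genes x samples log-expression matrix (hypothetical)
    d <- dist(t(mat))            # Euclidean distances between samples
    mds <- cmdscale(d, k = 2)    # classical MDS, first 2 dimensions
    plot(mds[, 1], mds[, 2], xlab = "Dimension 1", ylab = "Dimension 2")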
biostars
{"uid": 287415, "view_count": 9403, "vote_count": 2}
Hi everyone, I have a list of snps with string chromosome names that I want to replace with a sequential numerical value (preparing a qqman input file). For example, turn this:

    chrA
    chrA
    chrA
    chrB
    chrB
    chrC
    chrC
    chrC
    chrC
    ...
    chrZ
    chrZ

into this:

    1
    1
    1
    2
    2
    3
    3
    3
    3
    ...
    26
    26

At the moment I am doing this with a script, using something similar to this:

    sed 's/chrA/1/g'
    sed 's/chrB/2/g'
    sed 's/chrC/3/g'
    ..
    sed 's/chrZ/26/g'

However, my actual genome has a large number of contigs, so this command takes a lot of time to run. Is there a more efficient way to do this?
Here's a solution with the `replace` subcommand ([usage](http://bioinf.shenwei.me/csvtk/usage/#replace)) of [csvtk](http://bioinf.shenwei.me/csvtk/download); just download the `.tar.gz` file, decompress, and you can run it :)

First you have to prepare a mapping file, which is a **plain tab-separated text file**; you can easily use a spreadsheet software to create and export it.

    $ more mapping.tsv
    chrA    1
    chrB    2
    chrC    3

I guess the SNP data file should be a tab-delimited file too. Here's a dummy one:

    $ more data.tsv
    chrA    A   z
    chrA    A   x
    chrA    A   c
    chrB    B   v
    chrB    B   d
    chrC    C   tx
    chrC    C   t
    chrC    C   x
    chrC    C   z

Then use `csvtk` to edit the SNP data file:

    $ ./csvtk -H -t replace -f 1 -p '(.+)' -r '{kv}' -k mapping.tsv data.tsv
    [INFO] read key-value file: mapping.tsv
    [INFO] 3 pairs of key-value loaded
    1   A   z
    1   A   x
    1   A   c
    2   B   v
    2   B   d
    3   C   tx
    3   C   t
    3   C   x
    3   C   z

The long-option version would be easier to understand:

    ./csvtk --no-header-row --tabs replace --fields 1 --pattern '(.+)' --replacement '{kv}' --kv-file mapping.tsv data.tsv

**PS:** this is a general method not limited to this case. `sed` is good for single replacements; `csvtk replace -k` handles multiple replacements well, and it is written in [Go](https://golang.org/) with good performance.

**PS2:** [seqkit](http://bioinf.shenwei.me/seqkit/) has exactly the same function for handling FASTA/Q files.
biostars
{"uid": 231006, "view_count": 2151, "vote_count": 1}
Do you know any **public** scientific SQL server?

For example, I would cite:

- UCSC http://genome.ucsc.edu/FAQ/FAQdownloads#download29
- ENSEMBL http://uswest.ensembl.org/info/data/mysql.html
- GO http://www.geneontology.org/GO.database.shtml#mirrors

(I'll give a +1 to each correct answer)
1000 Genomes, since June 16, 2011: http://www.1000genomes.org/public-ensembl-mysql-instance

    mysql -h mysql-db.1000genomes.org -u anonymous -P 4272
biostars
{"uid": 474, "view_count": 11197, "vote_count": 22}
Hi All, I did a transcriptome assembly of Illumina SE and PE reads using Trinity, but the N50 values for my assembly are very low. Here is the summary:

```
Total trinity 'genes':       748144
Total trinity transcripts:   916206
Percent GC:                  43.83

Contig N10: 1765
Contig N20: 1064
Contig N30: 677
Contig N40: 477
Contig N50: 369

Median contig length:    253
Average contig:          364.39
Total assembled bases:   333854406
```

In addition to Illumina PE and SE data, I also have PacBio full-length transcript data. Is there any way I can perform a hybrid transcriptome assembly? I need your ideas on it. PBcR with the Celera assembler seems to be one option that corrects PacBio reads and performs hybrid assembly, but 1) it seems to be significantly slow, as mentioned by the authors, 2) it is not clear from its documentation whether it will use the Illumina reads beyond the read-correction step, i.e. for the final assembly, and 3) the authors provide no example of how to use it for 'transcriptome' hybrid assembly. These issues with PBcR make me think that it is a slow error-correction tool which uses only the PacBio reads for the final assembly. Are there any other options for de novo hybrid assembly of a transcriptome? Thanks, Bade
If your PacBio reads are already corrected, you can use them with Trinity (you don't mention whether they are or not). Quoting from [this](https://groups.google.com/forum/#!topic/trinityrnaseq-users/Pa_3gnvVnks) post from the Trinity mailing list:

"You can incorporate corrected pacbio reads into Trinity using the --long_reads parameter."

rnaSPAdes is based on SPAdes, so it should be able to take both types of data, and MIRA as well, but both of them will probably choke if the transcriptome is too large or complex.

If your PacBio reads are not corrected, correct them and try one of the above suggestions.
biostars
{"uid": 160072, "view_count": 2936, "vote_count": 1}
I have a large set of RNA-seq expression data in a DGEList object, and I want to plot the expression data for specific genes temporally, split between two factors. I started out by subsetting the data into a smaller matrix and then realised that was silly; I should be able to plot it from the DGEList object that the data is stored in. Each timepoint has three replicates, so I would also be looking to take a mean of those replicates before plotting. Would subsetting the data first still be the best option, or am I missing a far quicker and easier option?

DGEList counts:

    Gene Symbol  Sample1  Sample2  Sample3  Sample4  Sample5  Sample6  Sample7  Sample8  ...
    Gene1        54       55       53       78       79       74       81       82
    Gene2        23       21       22       45       44       47       61       62
    Gene3        74       75       73       81       82       80       83       88
    Gene4        2        3        1        10       9        8        12       11
    ...

Metadata:

         Sample Name ... Day
    [1,] Sample1         D0
    [2,] Sample2         D0
    [3,] Sample3         D0
    [4,] Sample4         D3
    [5,] Sample5         D3
    [6,] Sample6         D3
    [7,] Sample7         D7
    [8,] Sample8         D7
    ...

Using the examples above, what I am trying to do is draw an expression line plot for Gene2 and Gene3, including averaging the expression levels on each day; but as the samples come from two factors, they would need to be kept separate.
Okay, I get the feeling that this does not have to be anything special for now (in terms of a 'polished' plot). So, you could try this:

    df
            time gene1 gene2 gene3
    sample1 day1     1     2     3
    sample2 day1     4    10     3
    sample3 day2     1     2     3
    sample4 day2     1     2     3
    sample5 day3     1     2     3
    sample6 day3     1     2     3

Summarise by mean:

    df <- aggregate(df[,2:ncol(df)], df[1], mean)
    df
      time gene1 gene2 gene3
    1 day1   2.5     6     3
    2 day2   1.0     2     3
    3 day3   1.0     2     3

Plot:

    plot(1, type="n", ylab="Expression", xlab="Day (1, 2, 3)", xlim=c(1,3), ylim=c(0,10))
    lines(gene1 ~ time, data=df, lwd=2, col="royalblue")
    lines(gene2 ~ time, data=df, lwd=2, col="red4")
    lines(gene3 ~ time, data=df, lwd=2, col="forestgreen")

[![d](https://preview.ibb.co/mOG7bn/d.png)](https://ibb.co/hP4FU7)
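If you prefer ggplot2, here is a sketch building on the aggregated `df` above; reshaping to long format lets one line per gene fall out of the grouping:

    library(reshape2)
    library(ggplot2)

    # long format: one row per (time, gene) combination
    dd <- melt(df, id.vars = "time", variable.name = "gene")

    ggplot(dd, aes(x = time, y = value, colour = gene, group = gene)) +
        geom_line() +
        labs(x = "Day", y = "Expression")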
biostars
{"uid": 308872, "view_count": 2659, "vote_count": 1}
I'm doing an analysis of sequence features in 18S rRNA sequences. I have downloaded data from the SILVA database. Unfortunately, a typical vertebrate has more than a single 18S sequence. I know that most of them are considered pseudogenes, but unless I check them one by one, there's no way to guess which one is the canonical (reference) version. I have roughly 1000 sequences from 150 organisms to verify, so I'd prefer not to do it by hand. Where can I find, or how can I identify, the canonical version of 18S rRNA?
The SSU gene of S. cerevisiae (Z75578) is usually used as the canonical reference to mark the start and end positions of the V1-V9 regions: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0087624 - Table 3 in the supplemental materials is what you need.
biostars
{"uid": 142128, "view_count": 2407, "vote_count": 2}
For Tumor Mutation Burden (TMB), people usually calculate non-synonymous mutations / Mb. In this Illumina tech report (https://www.illumina.com/content/dam/illumina-marketing/documents/products/whitepapers/trusight-tumor-170-tmb-analysis-white-paper-1170-2017-001.pdf) they calculate TMB for TruSight Tumor 170 with both all mutations and non-synonymous mutations only. What strikes me is that all mutations have a higher correlation with WES. Is this an artifact of the low number of genes and the fact that this is an in silico selection of the WES? Other papers report good correlations with 300 and 500 genes, but probably 170 is too low.

> Results of In Silico Studies
> WES data from 5336 TCGA samples were filtered and analyzed in silico using TruSight Tumor 170. TMB estimated from the TruSight Tumor 170 targeted regions showed a high correlation to TMB estimated from WES, with R2 correlation values of 0.91 for total mutations (Figure 2A) and 0.90 for nonsynonymous mutations (Figure 2B)

It seems that if you want a proxy for neo-antigen formation to improve immunotherapies, counting non-synonymous mutations would give you a better proxy. Does it matter whether you use all mutations or only non-synonymous ones? Could someone point me to a good paper measuring these correlations? At what number of genes does the correlation start to drop?
Correlation is a rather poor evaluation metric in this case, since it can be driven by hyper-mutated outliers that are easy to get right with small panels; i.e., the correlation depends on both the mutation rate and the panel size. Including silent mutations will give you a more accurate total somatic mutation rate because you add a few more data points and reduce the sampling variance. This can, in theory, also provide a slightly more accurate estimate of the true missense rate. I don't think there is a paper, because it's straightforward to do the power calculation or the downsampling from TCGA data. Maybe the recent one from the Van Allen group at DFCI, where they compare whole-exome and small and large panels, is helpful.
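To illustrate the point about outlier-driven correlation, here is a small simulation sketch (all numbers are assumptions for illustration: a ~35 Mb exome, a ~1.5 Mb panel, mutations falling uniformly at random, and over-dispersed per-sample mutation counts with a few hyper-mutated outliers):

    # toy illustration: how panel size and hyper-mutated outliers drive TMB correlation
    set.seed(1)
    exome_mb <- 35   # assumed callable exome size
    panel_mb <- 1.5  # assumed panel size
    # over-dispersed per-sample mutation counts (negative binomial)
    wes_muts <- rnbinom(5000, size = 1, mu = 150)
    # mutations that happen to land inside the panel
    panel_muts <- rbinom(5000, size = wes_muts, prob = panel_mb / exome_mb)
    cor(wes_muts / exome_mb, panel_muts / panel_mb)
    # shrinking panel_mb, or removing the outliers, visibly moves this correlation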
biostars
{"uid": 262840, "view_count": 8870, "vote_count": 3}
Hi,

I'm looking for an easy way to **retrieve all the genes in a list that are associated with a certain GO term**, preferably using R/Bioconductor packages. I'm **not** interested in under/overrepresentation or enrichment.

For instance, say I have a list of 1000 genes and I want to create a sublist with only the genes known to be involved in 'heart development'.

Thanks!
Using [biomaRt](http://www.bioconductor.org/packages/2.2/bioc/html/biomaRt.html) within R:

    library(biomaRt)
    ensembl = useMart("ensembl", dataset="hsapiens_gene_ensembl") # uses human ensembl annotations
    # gets gene symbol, transcript_id and go_id for all genes annotated with GO:0007507
    gene.data <- getBM(attributes=c('hgnc_symbol', 'ensembl_transcript_id', 'go_id'),
                       filters = 'go_id', values = 'GO:0007507', mart = ensembl)
biostars
{"uid": 52101, "view_count": 31262, "vote_count": 20}
I have a GFF annotation for my sequences where many protein sub-features are overlapping. I want to extract the longest of those sub-features (i.e. one sub-feature for each sequence). For example, the following is how my gff file looks: ``` chrA01:14912487-14917603(-)_Name=Name1 Source1 repeat 6 5116 . - . ID=repeat_region1 chrA01:14912487-14917603(-)_Name=Name1 Source1 inverted 6 7 . - . Parent=repeat_region1 chrA01:14912487-14917603(-)_Name=Name1 Source1 LTR 6 5116 . - . ID=LTR_retrotransposon1;Parent=repeat_region1;similarity=95.70;seq_number=666 chrA01:14912487-14917603(-)_Name=Name1 Source1 ltrs 6 302 . - . Parent=LTR_retrotransposon1 chrA01:14912487-14917603(-)_Name=Name1 Source2 protein 535 835 1.7e-16 - . Parent=LTR_retrotransposon1;reading_frame=0;name=INT_crm chrA01:14912487-14917603(-)_Name=Name1 Source2 protein 535 874 0 - . Parent=LTR_retrotransposon1;reading_frame=0;name=INT_reina chrA01:14912487-14917603(-)_Name=Name1 Source2 protein 547 772 1.9e-28 - . Parent=LTR_retrotransposon1;reading_frame=0;name=INT_del chrA01:14912487-14917603(-)_Name=Name1 Source2 protein 1812 2124 8.3e-41 - . Parent=LTR_retrotransposon1;reading_frame=1;name=RNaseH_galadriel chrA01:14912487-14917603(-)_Name=Name1 Source2 protein 1812 2127 4.6e-15 - . Parent=LTR_retrotransposon1;reading_frame=1;name=RNaseH_v_clade chrA01:14912487-14917603(-)_Name=Name1 Source2 protein 1812 2127 1.2e-16 - . Parent=LTR_retrotransposon1;reading_frame=1;name=RNaseH_athila chrA01:14912487-14917603(-)_Name=Name1 Source1 ltrs 4815 5116 . - . Parent=LTR_retrotransposon1 chrA01:14912487-14917603(-)_Name=Name1 Source1 inverted 5115 5116 . - . Parent=repeat_region1 ### chrA01:18337410-18342821(-)_Name=Name2 Source1 repeat 1 5411 . - . ID=repeat_region2 chrA01:18337410-18342821(-)_Name=Name2 Source1 inverted 1 2 . - . Parent=repeat_region2 chrA01:18337410-18342821(-)_Name=Name2 Source1 LTR 1 5411 . - . ID=LTR_retrotransposon2;Parent=repeat_region2;similarity=98.93;seq_number=794 chrA01:18337410-18342821(-)_Name=Name2 Source1 ltrs 1 374 . - . Parent=LTR_retrotransposon2 chrA01:18337410-18342821(-)_Name=Name2 Source2 protein 427 1435 0 - . Parent=LTR_retrotransposon2;reading_frame=1;name=INT_crm chrA01:18337410-18342821(-)_Name=Name2 Source2 protein 439 1390 1.8e-38 - . Parent=LTR_retrotransposon2;reading_frame=1;name=INT_TF chrA01:18337410-18342821(-)_Name=Name2 Source2 protein 481 1435 0 - . Parent=LTR_retrotransposon2;reading_frame=1;name=INT_v_clade chrA01:18337410-18342821(-)_Name=Name2 Source2 protein 1657 1990 3e-18 - . Parent=LTR_retrotransposon2;reading_frame=1;name=RNaseH_v_clade chrA01:18337410-18342821(-)_Name=Name2 Source2 protein 1657 1990 6.8e-17 - . Parent=LTR_retrotransposon2;reading_frame=1;name=RNaseH_csrn1 chrA01:18337410-18342821(-)_Name=Name2 Source2 protein 1657 1990 2.7e-35 - . Parent=LTR_retrotransposon2;reading_frame=1;name=RNaseH_reina chrA01:18337410-18342821(-)_Name=Name2 Source1 ltrs 5038 5411 . - . Parent=LTR_retrotransposon2 chrA01:18337410-18342821(-)_Name=Name2 Source1 inverted 5410 5411 . - . Parent=repeat_region2 ### ``` I want to extract the longest or the best RNaseH regions from each sequence. I am able to parse the gff to get RNase annotations. But, I am not sure how to proceed next: ``` from gffutils.iterators import DataIterator from Bio import SeqIO input_filename = 'mygff.gff' infasta = list(SeqIO.parse('myseqs.fasta', 'fasta')) features=DataIterator(input_filename) for feature in features: if "name=RNaseH" in str(feature): print str(feature) #check lengths #check e-values ```
Your question is a little confusing, but I'll try my best; hopefully we'll get somewhere. Firstly, I wonder (just out of curiosity) how you came across this `gffutils` tool. GFF is a flat text file; one would usually just write a parser to get the relevant info from the file. But this `gffutils` thing is actually pretty cool, and since you want to get actual sequences as well, this tool should do it all for you.

Firstly, are you familiar with the python `dir` function? It returns a list of valid attributes for an object, e.g. `dir("string")` will return all methods available to a string. You can also use ipython to tab-expand methods. This is a good exploratory exercise for new methods, particularly for new tools like this one, since little info is available online. You can further call `help("string".endswith)` to get info on how to apply a particular method. Do this for your own knowledge:

    import sys
    from gffutils.iterators import DataIterator

    myGFF = sys.argv[1]
    myFasta = sys.argv[2]

    for feature in DataIterator(myGFF):
        print dir(feature)
        break

This should return a list like this:

    ['__class__', '__delattr__', '__dict__', '__doc__', '__eq__', '__format__', '__getattribute__', '__getitem__', '__hash__', '__init__', '__len__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setitem__', '__sizeof__', '__str__', '__subclasshook__', '__unicode__', '__weakref__', 'astuple', 'attributes', 'bin', 'calc_bin', 'chrom', 'dialect', 'end', 'extra', 'featuretype', 'file_order', 'frame', 'id', 'keep_order', 'score', 'seqid', 'sequence', 'sort_attribute_values', 'source', 'start', 'stop', 'strand']

Don't worry about the things with double underscores! All other things you should experiment with. I believe this is along the lines of what you are asking for (note that in your GFF the RNaseH domains are `protein` features whose `name` attribute starts with "RNaseH", so we test the attribute rather than the feature type):

    import sys
    from gffutils.iterators import DataIterator

    myGFF = sys.argv[1]
    myFasta = sys.argv[2]

    for feature in DataIterator(myGFF):
        # 'protein' features carry the domain name in their 'name' attribute
        if feature.featuretype == "protein" and feature.attributes["name"][0].startswith("RNaseH"):
            print feature.seqid, feature.attributes["name"][0], len(feature), feature.sequence(myFasta)

This will return the first column value (usually the chromosome name), the domain name (in your case RNaseH_*), the length of that feature, and, the most awesome thing in my opinion, the output of the `feature.sequence()` method, which returns the sequence fragment of your requested feature using the coordinates from the GFF file. That is really cool, I think. I'm not sure how to help you with the "longest/best RNaseH region" thing, since it is something very specific to you, but you should be able to impose some sort of test on `len(feature)` to tease out your sequence of interest.

p.s. also have a look at this post: https://www.biostars.org/p/138849/
biostars
{"uid": 140334, "view_count": 4588, "vote_count": 2}
Hi folks! I need your help with a ggplot2 representation. I have a data set with the following structure:

    log2FoldChange  Sequence_biotype  Knockdown
    -1.40           LTR               A
    -1.11           DNA               B
    -3.46           Protein           A
    -1.25           Protein           C
     1.03           DNA               B
     ...            ...               ...

I am plotting the fold changes as boxplots, one for each `Sequence_biotype`, with the Knockdown variable faceted. What I'm trying to do is a one-sample Wilcoxon test for each boxplot, comparing the log2FoldChange to 0 (to see if there is a significant change). That can be achieved with the following code, when the data is grouped by Sequence_biotype and Knockdown:

    wilcox.test(x = data, mu = 0)

My question is: how could I introduce the results of the Wilcoxon test into the plot as labels, or how could I compute it directly in the plot? (I have been trying `stat_compare_means(method = "wilcox", paired = FALSE)` from the ggpubr package, but all the p-values pile up in the same spot.)

Thank you beforehand, any help is appreciated! Best, Jordi
In case anyone faces the same problem, I will paste the code I used to get the plot.

    # Subset the dataset into the different knockdowns
    A = com_sig.df[com_sig.df$Knockdown=="A",]
    B = com_sig.df[com_sig.df$Knockdown=="B",]
    C = com_sig.df[com_sig.df$Knockdown=="C",]

    # Compute the test for the first knockdown and the position in the plot
    stat.test_A = compare_means(log2FoldChange ~ 1, paired = FALSE, data = A,
                                method = "wilcox.test", group.by = "Sequence_biotype", mu = 0) %>%
        mutate(y.position = 4.5)
    # Add a label to identify the knockdown
    stat.test_A$Knockdown = "A"

    # Same for the second knockdown
    stat.test_B = compare_means(log2FoldChange ~ 1, paired = FALSE, data = B,
                                method = "wilcox.test", group.by = "Sequence_biotype", mu = 0) %>%
        mutate(y.position = 4.5)
    stat.test_B$Knockdown = "B"

    # Same for the third one
    stat.test_C = compare_means(log2FoldChange ~ 1, paired = FALSE, data = C,
                                method = "wilcox.test", group.by = "Sequence_biotype", mu = 0) %>%
        mutate(y.position = 4.5)
    stat.test_C$Knockdown = "C"

    # Combine all 3 datasets into 1
    stat.test = do.call(rbind, list(stat.test_A, stat.test_B, stat.test_C))

    # Change the column name of the variable used to compute the wilcox test
    names(stat.test)[names(stat.test)==".y."] = "log2FoldChange"

    com = ggboxplot(data = com_sig.df, x = "Sequence_biotype", y = "log2FoldChange",
                    fill = "Sequence_biotype", facet.by = "Knockdown") +
        geom_hline(yintercept = 0, linetype = "dashed", color = "red") +
        theme_bw() +
        theme(legend.position = "none",
              axis.title.x = element_text(hjust = .975, vjust = -.6),
              plot.title = element_text(hjust = .5),
              axis.text.x = element_blank(),
              strip.background = element_blank(),
              strip.text.x = element_blank()) +
        geom_text(data = stat.test, aes(y = y.position, label = p.signif))

    com

[Plot generated][1]

[1]: https://ibb.co/hXY4rbT
biostars
{"uid": 448032, "view_count": 3191, "vote_count": 1}
Dear All, I have a data matrix with 17 samples and over 800 genes that belong to ten different gene families. I want to mark these gene families in the heatmap. I made the heatmap, but I do not know how to show the gene families in the heatmap graph. Does anyone know how to do that?
Yes, as per Sean's suggestion, in *ComplexHeatmap* you can segregate your heatmap into 'blocks' of different genes using the 'split' parameter. You can end up with nice heatmaps like this: https://www.biostars.org/p/273155/#273172

**Edit April 16, 2019: skip to the working example, here: https://www.biostars.org/p/286187/#286507**
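A minimal sketch of that idea (assuming `mat` is your genes-by-samples matrix and `families` is a character vector with one family label per row; newer ComplexHeatmap versions call the parameter `row_split`, older ones `split`):

    library(ComplexHeatmap)

    # one heatmap block per gene family; 'families' has one label per row of 'mat'
    Heatmap(mat,
            row_split = families,   # use 'split' in older versions
            show_row_names = FALSE)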
biostars
{"uid": 286187, "view_count": 13985, "vote_count": 2}
My expression data frame (a pretty small subset of it) is as such:

    gene         value_1   value_2
    XLOC_000060  3.662330  0.3350140
    XLOC_000074  2.568130  0.0426299

I have been struggling with ggplot over how to transform the data so that I can plot it in ggplot; basically, how do I define the factors etc.? I saw a few examples, but I guess I'm still not able to replicate what they did with the sample data. I end up getting this error:

    Error in eval(expr, envir, enclos)

It's a pretty fundamental error, I suppose, which I fail to figure out. Any help would be highly appreciated.
You have to *tidy* your data to plot it effectively with `ggplot2`. There are many ways to do this, below I show two (assuming you want to plot a heatmap). Note the difference in data rearrangement between `d` and `dd`. Also, highly recommend reading the [tidyr vignette][1] and the *Tidy data* section in the book *"R for data science"* by [Hadley Wickham][2] (creator of ggplot2, tidyr and reshape2). The online version of this book is available [here][3]. # create sample (untidy) dataset. d <- read.table(header = TRUE, text = "gene value_1 value_2 XLOC_000060 3.662330 0.3350140 XLOC_000074 2.568130 0.0426299") d gene value_1 value_2 1 XLOC_000060 3.66233 0.3350140 2 XLOC_000074 2.56813 0.0426299 # tidy it using tidyr package. library(tidyr) dd <- d %>% gather(sample, value, -gene) dd gene sample value 1 XLOC_000060 value_1 3.6623300 2 XLOC_000074 value_1 2.5681300 3 XLOC_000060 value_2 0.3350140 4 XLOC_000074 value_2 0.0426299 # plot heatmap. ggplot(dd, aes(x = sample, y = gene, fill = value)) + geom_tile() # tidy it using reshape2 package. library(reshape2) dd <- melt(d, variable.name = "sample") ggplot(dd, aes(x = sample, y = gene, fill = value)) + geom_tile() [1]: https://cran.r-project.org/web/packages/tidyr/vignettes/tidy-data.html [2]: http://hadley.nz [3]: http://r4ds.had.co.nz
biostars
{"uid": 231978, "view_count": 6319, "vote_count": 2}
Hi. I'm using pheatmap to create heatmaps. I've made a heatmap using the clustering option (method = correlation), and the resulting heatmap has a certain row order based on the clustering. I want to make another heatmap with pheatmap that has the same row order as the first one. Is there a way to specifically set the row order of a second pheatmap based on the first?

I've been trying to use the tree_row$order element of the pheatmap object that I create. I think I can use this to set the order of the other, but then I don't know how to get pheatmap to display/write to file the changed pheatmap object. Here's an example of some code:

    # create a pheatmap
    hm_1 <- pheatmap(mat1, cluster_rows=TRUE)
    hm_2 <- pheatmap(mat1, cluster_rows=FALSE)

    # get the row order of another pheatmap and set the first to the second
    hm_2$tree_row$order <- hm_1$tree_row$order

This works, but then I have this hm_2 object. How do I get it to display the resulting map again without calling pheatmap again? And is there a better way to do this?
You can reorder your second data matrix based on the row order of the first heatmap (where clustering has been applied). In my case this was useful to display non-transformed values while plotting the heatmap based on transformed values:

1) create the first pheatmap using transformed and normalized data

2) extract the row and column orders from it, as you suggested:

    # capture order for rows from first hm
    row.order = ph$tree_row$order
    col.order = ph$tree_col$order

3) reorder the non-transformed data based on the order extracted from the heatmap above:

    values.to.show = raw[row.order, col.order]

4) plot the new pheatmap using the same data, but overlay numbers from the raw matrix instead of the transformed+normalized numbers:

    ... display_numbers=values.to.show, ...

Hope this helps and can be recycled!
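Putting this together for the original question, i.e. plotting a second matrix in the row order learned from the first heatmap, here is a minimal sketch (assuming `mat1` and `mat2` have matching rows):

    library(pheatmap)

    # cluster the first matrix and capture the resulting row order
    hm_1 <- pheatmap(mat1, clustering_distance_rows = "correlation")
    row.order <- hm_1$tree_row$order

    # draw the second heatmap with rows pre-ordered and row clustering off
    pheatmap(mat2[row.order, ], cluster_rows = FALSE)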
biostars
{"uid": 214538, "view_count": 12330, "vote_count": 3}
Hi all, I need to run a PCA analysis on a microarray dataset to estimate which are the most relevant features. How can I do it? Is it possible? I tried with `scikit-learn`, but I was unable to come up with the relevant genes. I did it like this:

    from sklearn.decomposition import PCA
    import numpy as np

    # X is the matrix transposed (n samples on the rows, m features on the columns) - it is represented as a numpy array
    X = np.array([[-1, -2, 5, 1], [-3, -1, 1, 0], [-3, -2, 0, 2],
                  [1, 1, 1, 3], [2, 1, 1, 4], [3, 2, 0, 5]])

    # suppose we want the 2 most relevant features
    nf = 2
    pca = PCA(n_components=nf)
    pca.fit(X)
    X_proj = pca.transform(X)

With `X`

    array([[-1, -2,  5,  1],
           [-3, -1,  1,  0],
           [-3, -2,  0,  2],
           [ 1,  1,  1,  3],
           [ 2,  1,  1,  4],
           [ 3,  2,  0,  5]])

it returns `X_proj`

    array([[-2.9999967 ,  3.26498171],
           [-3.53939268, -1.18864266],
           [-2.77013188, -2.15637734],
           [ 1.67612209,  0.03059917],
           [ 2.87464655,  0.35674472],
           [ 4.75875261, -0.30730559]])

How can I tell which features were selected? Is there another way to do it (also in R, for example)? Thanks
Hi, be careful, because I think you are making some confusion about the concept of "components" in PCA. PCA doesn't return the most relevant "features" in the dataset; it rather returns two vectors (or more, depending on how many components you calculate), corresponding to the coordinates of each element on a plane that divides the dataset into two parts. For a quick explanation of how PCA works, you can read this [document](http://sorana.academicdirect.ro/pages/collagen/amino_acids/materials/PCA_1.pdf) or [this article in Nature Biotechnology](http://www.nature.com/nbt/journal/v26/n3/full/nbt0308-303.html).

If you want to determine how much each of your original variables contributes to the components, you can either look at the loadings of the PCA (I can't install scikit right now, but it may be inside the pca object), or you can calculate the correlation of each variable with each of the components, as described [here](https://onlinecourses.science.psu.edu/stat505/node/54). This should give you an idea of the "relevant" features, the features that contribute the most to dividing the data into two.

Another tool to do PCA with Python is [orange](http://orange.biolab.si/docs/latest/widgets/rst/unsupervized/PCA/#pca), which gives you both a graphical interface (if you want to use it) and a Python library.
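Two small pointers on the loadings. In scikit-learn they are exposed as the `components_` attribute of a fitted `PCA` object. And since the question also asked about R, here is a minimal sketch with base R's `prcomp`, reusing the toy `X` from the question; in `rotation`, variables with large absolute values on a component are the ones driving it:

    # same toy matrix as in the question
    X <- matrix(c(-1, -2, 5, 1,
                  -3, -1, 1, 0,
                  -3, -2, 0, 2,
                   1,  1, 1, 3,
                   2,  1, 1, 4,
                   3,  2, 0, 5), nrow = 6, byrow = TRUE)

    pca <- prcomp(X, center = TRUE)
    pca$rotation   # loadings: rows = original variables, columns = PCs
    summary(pca)   # proportion of variance explained per component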
biostars
{"uid": 98959, "view_count": 4694, "vote_count": 1}
I am analyzing microarray data generated using Illumina Human HT-12 chips, and there were multiple batches as the samples were analyzed. The data I have has been through the 'standard' GenomeStudio normalization steps, but has not been adjusted for any batch effects.

In analyses testing an outcome of interest against the expression values it is common to 'adjust' (include as an independent variable) for the batch effects using a factor variable.

I have also seen elsewhere that analysts may adjust for the relative log expression (RLE) mean to account for technical bias. RLE means are more commonly used to assess the batch effects using boxplots; I can see from boxplots in my data that a couple of the batches have significantly higher RLE means, but not all.

My question is: which method most accurately accounts for the technical variability introduced by the batches?

My feeling is that using the RLE mean values is best because, not only is this a linear variable, but it is actually based on the data! The batches may not necessarily have affected the expression, but to include them as covariates anyway must introduce some noise to the model. Whereas including the RLE mean values as a covariate, which are based exclusively on the expression data itself, will only account for the observed technical variation. Is this rationale logical? Have I overlooked anything? Many thanks.
This article compared several methods of removing batch effects: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0017238

By their metrics, ComBat (http://www.bu.edu/jlab/wp-assets/ComBat/Abstract.html) performed the best.

ComBat is available in the R/Bioconductor package SVA: http://www.bioconductor.org/packages/release/bioc/html/sva.html

The advantage of using these methods over simpler approaches is that some of them can remove batch effects while "protecting" your model of interest.
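A minimal sketch of a ComBat call with a protected model (hypothetical names: `edata` is a probes-by-samples expression matrix, `batch` the batch labels, `outcome` the biological variable of interest):

    library(sva)

    # model matrix containing the biological covariate to protect
    mod <- model.matrix(~ outcome)

    # returns the batch-corrected expression matrix
    combat_edata <- ComBat(dat = edata, batch = batch, mod = mod)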
biostars
{"uid": 55551, "view_count": 6383, "vote_count": 2}
I have to generate a "toy dataset" with one human chromosome and short reads mapping on these contigs. Which one would you pick? I would go for chromosome 21 since: 1) it is short, so less data 2) it has no gender bias (depth) like X/Y chromosome. Is this a good pick or would you advise to go for another one? Are there pre-processed datasets anywhere with contigs and corresponding reads filtered for just the specific chromosome?
Depends. What sort of data? What's the goal? Chromosome 21 is pretty sparse, gene-wise, even taking its size into account. Chr19 is usually my go-to, as it's still quite short while having a decent gene density. Avoiding X/Y is probably a good idea; otherwise, anything after chromosome 13 is pretty small.
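I'm not aware of a widely used pre-packaged single-chromosome set, but rolling your own is quick. A sketch with samtools, assuming an indexed reference FASTA and an indexed, coordinate-sorted BAM (file names are illustrative):

```bash
samtools faidx hg38.fa chr19 > chr19.fa              # pull one chromosome
samtools view -b whole_genome.bam chr19 > chr19.bam  # reads mapped to it (needs the .bai index)
samtools fastq chr19.bam > chr19_reads.fq            # back to FASTQ if needed
```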
biostars
{"uid": 419381, "view_count": 469, "vote_count": 1}
Hi everyone Can you please help me to extract a SAM file from an SRA file? I took the dataset from here http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSM1208162 I downloaded the SRA file and then ran sra-dump on it (the dataset page says the reads are already aligned). But as a result I got a very small file (~50 Mb). When I tried to convert it to a sorted BAM file I got: ``` [bam_header_read] EOF marker is absent. The input is probably truncated. [sam_header_line_parse] expected '@XY', got [@HD VN:1.3] Hint: The header tags must be tab-separated. [samopen] no @SQ lines in the header. ``` I tried to look at the summary information for this SRA file here: http://trace.ncbi.nlm.nih.gov/Traces/sra/?run=SRR951914 and didn't see any information about an alignment. I tried to look at the alignment information with the command: vdb-dump ./SRR951915.sra | grep "ALIGNMENT_COUNT" and got an error ``` vdb-dump.2.1.7 int: data bad version while constructing page map within virtual database module - VCursorCellData( col:PANEL at row #1074 ) failed fastq-dump also led to an error: data bad version while constructing page map within virtual database module - failed SRR951914.sra ``` Any idea?
Hi, yesterday I got good output from fastq-dump. Your command line is wrong. To run fastq-dump correctly you should know whether your reads are single-end or paired-end. In your case the reads are single-end, so the command line will be: ./fastq-dump --split-spot SRR951914.sra For paired-end reads: ./fastq-dump --split-files *file*.sra
biostars
{"uid": 137228, "view_count": 7305, "vote_count": 2}
Hello BioStars community, New to perl and programming in general, so I thought I might try out my luck asking a question here. I am trying to match a fairly conserved protein sequence to a proteome using a regex. I am able to output the matching lines, as well as their positions, but I cannot find a way to output the accession numbers along with lines that match my conserved protein. Here's part of my code: my $proteins; open( file, "Athaliana_167_protein.fa" ) or die "can't open file!"; while (<file>){ if (/W[S]TRRKIAI/) {print} } Would using lookahead/lookbehinds possibly work to print out the match line and accession number? Thanks!
This statement is important for parsing fasta efficiently in perl (unless you want to use BioPerl): local $/ = "\n>"; It allows you to read a complete fasta record instead of each line. For example: open(my $FASTA, '<', "Athaliana_167_protein.fa") or die "can't open file: $!"; { local $/ = "\n>"; # read whole fasta records; always use local! while (my $fastarec = <$FASTA>) { chomp $fastarec; my ($defline, @seq) = split "\n", $fastarec; # seq id is the first line $defline =~ s/^>//; # remove left-over >, just in case my $seq = join "", @seq; # put the sequence back together $seq =~ s/\s//g; # remove potential left-over spaces, empty lines etc. if ($seq =~ /W[S]TRRKIAI/) { # your motif print join("\n", ">$defline", @seq), "\n"; # output the record, defline included, in the original formatting } } } close($FASTA);
biostars
{"uid": 100823, "view_count": 3194, "vote_count": 1}
I was wondering if anyone knew of new software that can help find novel genes and isoforms in my RNA-seq data? I know there's Cufflinks and whatnot, but that is a bit old. I was wondering if anyone knew of anything newer? A lot of the genes in the genome that I am working with are predicted, and I am interested to see whether the RNA-seq data matches up with the gene predictions.
[StringTie][1] is the successor of Cufflinks. There are plenty of relatively recent programs, for example, [KNIFE][2], [BIISQ][3], or [SparseIso][4]. P.S.: see a comprehensive list at [Omicstools][5]. [1]: https://ccb.jhu.edu/software/stringtie/ [2]: https://github.com/lindaszabo/KNIFE [3]: https://github.com/bee-hive/BIISQ [4]: https://github.com/henryxushi/SparseIso [5]: https://omictools.com/novel-transcript-quantification-category
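A minimal StringTie invocation sketch for checking gene predictions against RNA-seq evidence (flags per the StringTie/gffcompare documentation; file names are illustrative):

```bash
stringtie sorted_aln.bam -G predicted_genes.gff -o assembled.gtf -p 8   # annotation-guided assembly
gffcompare -r predicted_genes.gff -o cmp assembled.gtf                  # class codes flag novel loci/isoforms
```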
biostars
{"uid": 301457, "view_count": 2610, "vote_count": 1}
I have a VCF file where in the INFO header I see `Number=1` and `Number=0`. What does that mean? Does it mean that one is printed and the other is not? ##contig=<ID=scaffold49,length=3478500> ##ALT=<ID=*,Description="Represents allele(s) other t... ##INFO=<ID=INDEL,Number=0,Type=Flag,Description="Indicates that... ##INFO=<ID=IDV,Number=1,Type=Integer,Description="Maximum number... ##INFO=<ID=IMF,Number=1,Type=Float,Description="Maximum fraction ... ##INFO=<ID=DP,Number=1,Type=Integer,Description="Raw read depth"> ##INFO=<ID=VDB,Number=1,Type=Float,Description="Variant Distance Bias for ... ##INFO=<ID=RPB,Number=1,Type=Float,Description="Mann-Whitney U test ... ##INFO=<ID=MQB,Number=1,Type=Float,Description="Mann-Whitney U test ... ##INFO=<ID=BQB,Number=1,Type=Float,Description="Mann-Whitney U test ... ##INFO=<ID=MQSB,Number=1,Type=Float,Description="Mann-Whitney U test ... ##INFO=<ID=SGB,Number=1,Type=Float,Description="Segregation based metric."> ##INFO=<ID=MQ0F,Number=1,Type=Float,Description="Fraction of MQ0 reads (smaller is better)"> ... Also, why can't I see INDEL, IDV and IMF in the INFO column, and why does it start at DP? #CHROM POS ID REF ALT QUAL FILTER INFO ... scaffold440 420 . T G 999 . DP=175;VDB=4.12831e-08;SGB=14.3961;RPB=0.362442;MQB=0.00113257;MQSB=6.33575e-07;BQB=0.21253;MQ0F=0;AF1=0.07 scaffold440 451 . C A 999 . DP=174;VDB=2.7432e-08;SGB=4.37813;RPB=0.00386186;MQB=0.259306;MQSB=1.3575e-06;BQB=0.237116;MQ0F=0;AF1=0.102
The VCF format is kind of all over the place. The fields are generally not required. Rather, those lines tell you what the field means, if it happens to be present. "Number=1,Type=Integer" indicates that if that tag is present it will be followed by a single integer value (e.g. "DP=175"). "Number=0,Type=Flag" means it won't be followed by anything (e.g. "INDEL") because the presence of the tag itself indicates some kind of information (like in this case that the variant call is an indel, in case you are unable to determine that yourself by noticing that the length of the ref and alt are different). Edit: I would be remiss to not mention the [official VCF spec][1] in this post. [1]: https://samtools.github.io/hts-specs/VCFv4.3.pdf
biostars
{"uid": 231648, "view_count": 3165, "vote_count": 1}
Dear all, I have indexed the C. elegans reference genome with: bwa index output/genome/ref/seq/celegans.fa and then aligned my de novo assembly to the reference with: bwa mem -t 8 -x intractg output/genome/ref/seq/celegans.fa input/assembly/celegans/hgap/bristol/assembly.fa > output/alignment/pacbio/bwa/ref/bristolAssembly.sam I then ran samtools view -bS output/alignment/pacbio/bwa/ref/bristolAssembly.sam > output/alignment/pacbio/bwa/ref/bristolAssembly.bam samtools sort output/alignment/pacbio/bwa/ref/bristolAssembly.bam -o output/alignment/pacbio/bwa/ref/bristolAssemblySorted.bam samtools index output/alignment/pacbio/bwa/ref/bristolAssemblySorted.bam to visualise the alignment in IGV. This is what I get: [http://cristian-riccio.ch/wp-content/uploads/2017/07/igv_snapshot.png][1] You can see that my assembly is the same as the reference, just shifted by 12 bases or so. Does anyone have any suggestions about how to solve this problem? Thanks. Best, C. [1]: http://cristian-riccio.ch/wp-content/uploads/2017/07/igv_snapshot.png
Solved finally! [http://cristian-riccio.ch/wp-content/uploads/2017/07/igv_snapshotSolution.png][1] I used this command instead, penalising gap extensions less. bwa mem -t 8 -E0.5 -x intractg output/genome/ref/seq/celegans.fa input/assembly/celegans/hgap/bristol/assembly.fa > output/alignment/pacbio/bwa/ref/bristolAssembly.sam The assembly comes from PacBio, and this technology has ~12% indel error in the raw reads, which might be why the contigs also need a bit more leeway on indels during alignment. I figured it out with a colleague: we looked at the start of the chromosomes, where the alignment started off correct and then became misaligned at a larger indel in my assembly. Thanks to all for the suggestions. Best, C. [1]: http://cristian-riccio.ch/wp-content/uploads/2017/07/igv_snapshotSolution.png
biostars
{"uid": 260542, "view_count": 3092, "vote_count": 1}
I found some tips for speeding up `read.table()` [here][1], I wondered if anyone can suggest something for [read.fasta()][2], which is part of the seqinr library. Not that they're really comparable. I have a ~6GB file. [1]: http://www.biostat.jhsph.edu/~rpeng/docs/R-large-tables.html [2]: http://www.inside-r.org/packages/cran/seqinr/docs/read.fasta
Since that function is written in R I don't think optimizing your code will result in much of a speed up compared to using a library written in another language. If you want to work in R, it would probably be easier to use the `system()` command to transform your sequences with another program or just use command line as Pierre suggested. If you have a specific task I'm sure people can provide a solution.
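For example, a minimal sketch assuming Bioconductor is available; `readDNAStringSet()` from Biostrings is C-backed and usually much faster than `seqinr::read.fasta()` on large files (the file name is illustrative):

```r
library(Biostrings)
seqs <- readDNAStringSet("big_file.fasta")   # or readAAStringSet() for protein sequences
```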
biostars
{"uid": 143485, "view_count": 8389, "vote_count": 1}
<p>This dataset is for specific disease-gene-test results. The dataset goes like this. </p> <p>| Test | Gene | Relevance | Values |</p> <p>Test and Gene are the two parameters on the x and y axes. Values are the combined results for the parameter pair. The problem for me here is that I have one more parameter called Relevance, which represents the relevance of the test-gene pair, and it is boolean (only two values, YES/NO).</p> <ul> <li>The dataset (Relevance) should be differentiated in the map with different colours (like red and green).</li> <li>The gradient of that colour represents the numerical values for that interaction.</li> </ul> <p>The end result I was aiming at was: Test on the x-axis and Gene on the y-axis, with the map for these interactions using only two colours (representing the Relevance values) and the gradient of each colour representing the Values. Is it possible to achieve this kind of heatmap? If yes, how? If not, is there any other option to display this kind of data (similar to a heatmap)?</p> <p>Help appreciated !!</p> <p>Thanks,</p> <p>RDS</p> <p>something like this - <img src='http://www.mathworks.com/matlabcentral/fx_files/24253/2/heatmap.png' alt='something like this' /></p>
<p>Here is some R code that may help. I must admit I do not understand exactly how your <code>Relevance</code> and <code>Value</code> variables are related (or expected to interact). I have made some guesses. If I have guessed wrong, perhaps you can post some sample data to help clear up my confusion?</p> <p>I have used <code>scale_fill_gradient2</code> in these examples. You can specify three different colors: high, mid (default is white), low. You can specify the midpoint (value that maps to the mid color, default=0), and upper and lower value limits. This may provide enough flexibility to show your data the way you want it.</p> <p>In Example 1, negative values range from red to white, and positive values range from white to blue. In Example 2, where <code>relevance</code> is FALSE, no color is plotted, and where <code>relevance</code> is TRUE, the color gradient spans the range red-white-blue.</p> <pre><code>library(ggplot2) # Create test data. dat1 = data.frame(x=factor(rep(c("A", "B", "C"), 3)), y=factor(rep(c(37, 8.7, -17.7), c(3, 3, 3))), z=c(34, 18, 31, 9, -2, 4, -21, -33, -13)) p1 = ggplot(dat1, aes(x=x, y=y, fill=z)) + theme_bw() + geom_tile() + geom_text(aes(label=paste(z))) + scale_fill_gradient2(midpoint=0, low="#B2182B", high="#2166AC") + ggtitle("Example 1") ggsave(plot=p1, filename="plot_1.png", height=4.5, width=5) </code></pre> <p><img src="http://dl.dropbox.com/u/15656938/plot_1_20121105.png" alt="plot 1"/></p> <pre><code># Create a slightly different test dataset. dat2 = data.frame( gene=factor(rep(c("Gene_A", "Gene_B", "Gene_C"), 3)), test=factor(rep(c("Test_1", "Test_2", "Test_3"), c(3, 3, 3))), relevance=c(TRUE, TRUE, FALSE, FALSE, TRUE, TRUE, TRUE, FALSE, TRUE), value=c(-16, 3, NA, NA, -13, 25, -4, NA, -26)) p2 = ggplot(dat2, aes(x=gene, y=test, fill=value)) + theme_bw() + geom_tile() + geom_text(aes(label=paste(value))) + scale_fill_gradient2(midpoint=0, low="#B2182B", high="#2166AC") + ggtitle("Example 2") ggsave(plot=p2, filename="plot_2.png", height=4.5, width=5) </code></pre> <p><img src="http://dl.dropbox.com/u/15656938/plot_2_20121105.png" alt="plot 2"/></p>
biostars
{"uid": 56291, "view_count": 17291, "vote_count": 2}
There are two VCF files that I'd like to merge using GATK or VCFtools. The problem is, they have different chromosomal notation: one has chr, the other does not. This question could be similar to [this one](/p/13462/) Are there any quick awk/sed commands that you would suggest? I'd also appreciate a comment on which of the two (GATK/VCFtools) is more reliable for this task.
awk '{gsub(/^chr/,""); print}' your.vcf > no_chr.vcf Should get rid of the chr prefix on the records. awk '{if($0 !~ /^#/) print "chr"$0; else print $0}' no_chr.vcf > with_chr.vcf Will add the chr to a VCF without it. You can use either command depending on how the chromosomes are named in your reference. Note that neither command touches the `##contig` header lines, so you may need to edit those to match as well. Regarding preference of tools, if you plan to do downstream processing with GATK, I'd suggest sticking with GATK CombineVariants for consistency.
biostars
{"uid": 98582, "view_count": 52799, "vote_count": 19}
Hello! I am quite new to bioinformatics, so I hope my question will be clear enough. I am trying to run a DESeq2 analysis on 25 bovine tumor samples. Among them I have two technical replicates of my unique control (I know it's not ideal) and most of my "treated" samples have one technical replicate too. Before any DESeq analysis I had to drop a few samples because the quality of the RNA-seq was not good enough. ```r design = ~Group ``` Overview of `colData` ``` row.names sample Group sample1 sample1 treated sample2 sample1 treated sample3 sample2 control sample4 sample2 control ``` I tried two different approaches: either start the DESeq analysis without specifying that I had technical replicates (`dds`) or use the `collapseReplicates` function based on the `colData` sample column to merge the reads (`ddsCollapsed`). ```r dds <- DESeqDataSetFromMatrix(matrix, colData, design) ddsCollapsed<- collapseReplicates(dds, groupby= colData(dds)$sample, renameCols=T) ``` My problem lies in the DESeq analysis: ```r DESeq(dds) estimating size factors estimating dispersions gene-wise dispersion estimates mean-dispersion relationship final dispersion estimates fitting model and testing -- replacing outliers and refitting for 6787 genes -- DESeq argument 'minReplicatesForReplace' = 7 -- original counts are preserved in counts(dds) estimating dispersions fitting model and testing DESeq(ddsCollapsed) estimating size factors estimating dispersions gene-wise dispersion estimates mean-dispersion relationship final dispersion estimates fitting model and testing -- replacing outliers and refitting for 24224 genes -- DESeq argument 'minReplicatesForReplace' = 7 -- original counts are preserved in counts(dds) estimating dispersions fitting model and testing ``` I am working with the bovine ENSEMBL annotation, which contains ~24660 entries... I was really surprised by the number of outliers. Moreover, the MA plots from those two analyses are really not great (attached is the one for `ddsCollapsed`): [![MA plot DESeq][1]][1] I have already read the supplementary data about Cook's distance. **So my questions are the following**: 1. Do I have to worry about such a high number of outliers? Is it common? What could be the reasons leading to those numbers? 2. If yes to (1), what can I do to overcome this trouble? 3. An unrelated question: Is it possible to put missing values (NA) in the colData table? I tried and got this error: ```r Error in t(hatmatrix %*% t(y)) : "error in evaluating the argument 'x' in selecting a method for function 't': Error in hatmatrix %*% t(y) : non-conformable arguments" ``` Thanks for reading this long post! Any advice would be appreciated! Vincent [1]: http://postimg.org/image/jxnn4mhev/
The count outlier flagging is useful when there are a minority of outliers in the dataset, but as you have noted, something else is going on here with so many genes flagged. There are two reasons for so many genes being flagged as outliers: either the method for flagging outliers is not appropriate for the distribution of counts in your data and should be turned off (by setting `minReplicatesForReplace=Inf` and `cooksCutoff=FALSE`), or you have a sample which is a count outlier in almost every gene (which could be found using `plotPCA` as in the vignette). My recommendation: if you don't find an obvious outlier sample which is contributing to most of these filtered genes, then turn off the filtering and inspect the top genes using `plotCounts`. No, you can't include NA in the columns which are used for modeling. We need complete covariate information.
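A short sketch using the arguments named above (the `gene` choice is just one way to pick a top gene for inspection):

```r
dds <- DESeq(dds, minReplicatesForReplace = Inf)   # no outlier replacement
res <- results(dds, cooksCutoff = FALSE)           # no outlier-based filtering
plotCounts(dds, gene = which.min(res$padj), intgroup = "Group")
```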
biostars
{"uid": 149031, "view_count": 6832, "vote_count": 3}
Duplicate reads have first been removed using picard: java -jar -Xmx3g picard/dist/picard.jar MarkDuplicates INPUT=input.bam OUTPUT=output.bam METRICS_FILE=output.dup_metrics CREATE_INDEX=TRUE VALIDATION_STRINGENCY=SILENT When I run samtools flagstat on the output bamfile I get the following: 2182812 + 0 in total (QC-passed reads + QC-failed reads) 226710 + 0 duplicates 2176925 + 0 mapped (99.73%:-nan%) 2182812 + 0 paired in sequencing 1091406 + 0 read1 1091406 + 0 read2 2156992 + 0 properly paired (98.82%:-nan%) 2171322 + 0 with itself and mate mapped 5603 + 0 singletons (0.26%:-nan%) 9776 + 0 with mate mapped to a different chr 7030 + 0 with mate mapped to a different chr (mapQ>=5) So presumably the file still contains 226710 duplicate reads. If I filter out duplicates according to this table: Flag Chr Description 0x0001 p the read is paired in sequencing 0x0002 P the read is mapped in a proper pair 0x0004 u the query sequence itself is unmapped 0x0008 U the mate is unmapped 0x0010 r strand of the query (1 for reverse) 0x0020 R strand of the mate 0x0040 1 the read is the first read in a pair 0x0080 2 the read is the second read in a pair 0x0100 s the alignment is not primary 0x0200 f the read fails platform/vendor quality checks 0x0400 d the read is either a PCR or an optical duplicate using the command: samtools view -F 400 output.bam | wc -l I get 506072, not: total reads (2182812) - duplicates (226710) = 1956102 My question is why does samtools flagstat indicate that there are still duplicates present after running picard tools, and why are these figures inconsistent when I attempt to filter out duplicates using samtools? I've been asked to remove duplicates for a project. At the moment I am very confused as to which method I should use given the inconsistencies in results.
>samtools view -F 400 output.bam To filter out duplicates use `-F 1024` or `-F 0x0400`, not 400: samtools reads the value as decimal unless it has the `0x` prefix, so `-F 400` actually filters on flags 0x190 (secondary alignment + second in pair + reverse strand), not the duplicate flag. See also https://broadinstitute.github.io/picard/explain-flags.html
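If the aim is a BAM with duplicates actually removed rather than just marked, a sketch (Picard's `REMOVE_DUPLICATES=true` at marking time would achieve the same):

```bash
samtools view -b -F 1024 output.bam > output.dedup.bam
```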
biostars
{"uid": 208897, "view_count": 10981, "vote_count": 2}
Hi, I have a list of genes from a Pseudomonas strain with official gene identifiers, and I have been trying to find a resource to sort these genes according to their metabolic pathways. If possible, I'd also like to represent these results in some sort of graph, like a circular plot or a barplot, but this is secondary. This strain is not supported by databases like KEGG, but it is included in the NCBI. I have tried R packages like clusterProfiler and web servers like DAVID 6.8, and I have not been able to make them work so far. Any tips/advice would be greatly appreciated. Cheers PS: I have recently started learning R and UNIX so I'm not very proficient.
There are indeed a lot of tools and resources to choose from. Three important considerations are (1) methods, (2) update frequency, and (3) coverage. 1. METHODS. The methods for pathway classification you are asking about are generally referred to as functional enrichment analyses and include over-representation analysis (ORA), gene set enrichment analysis (GSEA), and topological analysis. Each resource implements one or more of these methods, usually with a bit of customized pre- or post-filtering. ClusterProfiler, for example, supports both ORA and GSEA. There are review papers that compare these methods, but as there aren't really any gold-standard benchmarks, many folks just try more than one method. In any case, it's important to know which method you are using with a given tool, and its caveats. 2. UPDATES. In terms of sources of pathway annotations: KEGG has not been significantly updated since around 2011 (https://www.kegg.jp/kegg/docs/upd_map.html), BioCyc is updated around 3 times a year (https://biocyc.org/release-notes.shtml), and WikiPathways is significantly updated every month (http://releases.wikipathways.org)! So, this is something to definitely take into account when choosing a resource to plug into a given method. In terms of tools: DAVID hasn't been updated since 2016 (https://david.ncifcrf.gov/gene2gene.jsp), which means their GO and pathway annotations will all be outdated. ClusterProfiler is under active development (https://bioconductor.org/packages/release/bioc/html/clusterProfiler.html) and lets you supply your own GMT file from WikiPathways or wherever to perform ORA or GSEA. 3. COVERAGE. If you're studying human or mouse models, then pretty much any resource will do. But you are working with a Pseudomonas strain. This will dramatically limit your options. In general, I would suggest BioCyc for bacterial species coverage. It looks like they currently support two different strains for Pseudomonas. You might consider translating your genes to E. coli identifiers to perhaps expand your options to other resources.
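A sketch of the custom-GMT route in clusterProfiler (the GMT file name and the `my_genes` vector are assumptions, e.g. a WikiPathways export for your strain):

```r
library(clusterProfiler)
gmt <- read.gmt("pseudomonas_pathways.gmt")      # illustrative file name
ora <- enricher(gene = my_genes, TERM2GENE = gmt)
barplot(ora)   # quick graphical summary, as asked for in the question
```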
biostars
{"uid": 468345, "view_count": 703, "vote_count": 1}
<p>I would like to use the Gviz package, but I have a reference genome in FASTA format, not a valid UCSC genome (it has not been published yet).<br /> <br /> Is it possible to use this FASTA file instead?<br /> <br /> Let's say that I have a genome in FASTA format on disk:</p> <pre> library(Biostrings) ncrna &lt;- readDNAStringSet(file = &quot;GTgenome.fa&quot;) head(ncrna) A DNAStringSet instance of length 6 width seq names [1] 20202851 CCCAGTTTTCCCCACTCTGTGA...AAAGATCTTACAACCGATTTT chr10 Major... [2] 20315886 AGCCGACGAGACTCACAGAACC...TCACAAACCCCCTCGGGAGGG chr11 Major... [3] 20466350 GATTAGACCTCCGAAAGGGGTA...ATTATTAATTATTAAATATTA chr12 Major... [4] 16480340 GTCTCCACTTGCCCCACAACGG...AGATGACGATGATGAAGATGA chr13 Major... [5] 16193477 CTCTGTGACATCACAGCCATGG...GGGTTACACACGTTGTTTTTT chr14 Major...</pre> <p>Using the object created to replace a UCSC genome:</p> <pre> ideoTrack &lt;- IdeogramTrack(genome=ncrna, chromosome=&quot;chr10&quot;, fontsize=14) Error in .Call2(&quot;new_XStringSet_from_CHARACTER&quot;, ans_class, ans_elementType, : key 51 (char &#39;3&#39;) not in lookup table In addition: Warning message: In if (!token %in% base::ls(env)) { : the condition has length &gt; 1 and only the first element will be used</pre> <p>I am probably misunderstanding a very basic feature here, but I would be grateful for some help!</p>
I am not sure if I fully understood the problem, but what you could try is to replace a UCSC ideogram with your own data. This solution is certainly not the best way, but I'm not that familiar with the Gviz package. Build a track from, for example, the human genome chr22. Subsequently, replace the necessary slots (range, chromosome bands and names) with your own data. library(Gviz) ideoTrack <- IdeogramTrack(genome = "hg38",chromosome = "chr22") plotTracks(ideoTrack,from = 1,to = 1000000) # set your own my_ideoTrack <- ideoTrack # RANGE START=seq(1,100,10) END=seq(10,100,10) RANGES <- GRanges(seqnames = c(letters[1:10]),IRanges(start =START ,end =END)) # arbitrary Giemsa staining GS <- c("gneg", "gpos100", "gpos25","acen", "gpos50", "gpos75", "gvar", "stalk" ,NA,NA) bt <- data.frame(chrom="own1",chromStart= seq(1,100,10),chromEnd=seq(10,100,10),name=letters[1:10],gieStain=GS) # chromosome name CHR <- "own1" # Replace with your own data my_ideoTrack@range <- RANGES my_ideoTrack@chromosome <- CHR my_ideoTrack@name <- CHR my_ideoTrack@bandTable <- bt # don't enforce UCSC chromosome names options(ucscChromosomeNames=FALSE) gx <- GenomeAxisTrack() # Plot plotTracks(list(my_ideoTrack,gx),from = 1,to = 15)
biostars
{"uid": 167075, "view_count": 3615, "vote_count": 1}
Hi : I have implemented several functions, some of which behave in very similar ways and share identical structure. I intend to reuse the code more efficiently in my wrapper functions, to keep the function body small for the sake of easy testing and debugging. I am trying to find a better way to construct my wrapper function so it is as small as possible. How can I efficiently reuse the same code structure many times? Can anyone give me a possible idea to overcome this issue? * Note: among the param list, `obj.List` is a list of peak intervals as GRanges objects, `intList` is a list of integer lists (overlap position indexes), `val.List` is a list of p-values, and `threshold` is a numeric scalar. This is the wrapper function in which the two small sub-functions share the same pattern: myFunc <- function(obj.List, intList, threshold, ...) { # func.1 <- function() { keepIdx <- lapply(intList, function(ele_) { keepMe <- sapply(val.List, function(x) x<=threshold) res <- ele_[keepMe] }) expand.Keep <- Map(unlist, mapply(extractList, obj.List, keepIdx)) return(expand.Keep) } func.2 <- function() { dropIdx <- lapply(intList, function(ele_) { drop_ <- sapply(val.List, function(x) x > threshold) res <- ele_[drop_] }) expand.drop <- Map(unlist, mapply(extractList, obj.List, dropIdx)) return(expand.drop) } # then use the output of func.1 and func.2 as arguments for another function # in this wrapper # how can I make this wrapper more efficient? How can I reuse the code? } I am just trying to make the function body of this wrapper smaller, while still being able to call func.1 and func.2 as arguments to trigger another sub-function. How can I make this happen? Any ideas? Thanks a lot
I am not completely sure I understand what you are trying to do. Maybe an example with your desired use case and expected output would help. For more advanced R programming I would recommend reading [Advanced R][1], by [Hadley Wickham][2] (halfway through it myself). My guess is that you want something similar to [functionals][3], or perhaps a [closure][4], but I am not completely sure. ### Edit: You mention that one of your objectives is to obtain *code that is easy to test and debug*. I also see from your repository that you intend to create an R package. One thing I highly recommend for testing is [unit testing][5]. There are many options in R, like those provided by the packages [RUnit][6], [testit][7] and [testthat][8]. I now use testthat because I found it easy to implement. You can learn more about how to use it in the book (written by Hadley Wickham) [R packages][9], which has a section dedicated to [unit testing][10]. [1]: http://adv-r.had.co.nz [2]: http://hadley.nz [3]: http://adv-r.had.co.nz/Functionals.html [4]: http://adv-r.had.co.nz/Functional-programming.html#closures [5]: https://en.wikipedia.org/wiki/Unit_testing [6]: https://cran.r-project.org/package=RUnit [7]: https://cran.r-project.org/package=testit [8]: https://cran.r-project.org/package=testthat [9]: http://r-pkgs.had.co.nz [10]: http://r-pkgs.had.co.nz/tests.html
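Since your two sub-functions differ only in the comparison, one option is a function factory that takes the predicate as an argument. A sketch mirroring the code in your question (`extractList` is from IRanges; the names are illustrative):

```r
make_filter <- function(keep) {
  function(obj.List, intList, val.List, threshold) {
    idx <- lapply(intList, function(ele_) {
      ele_[sapply(val.List, function(x) keep(x, threshold))]
    })
    Map(unlist, mapply(extractList, obj.List, idx))
  }
}
filter_keep <- make_filter(function(x, t) x <= t)  # replaces func.1
filter_drop <- make_filter(function(x, t) x > t)   # replaces func.2
```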
biostars
{"uid": 222469, "view_count": 2016, "vote_count": 1}
I have analyzed RNA-seq data with DESeq2 and am trying to plot a 3D PCA using rgl-plot3d. I was trying to output PC1, PC2, and PC3 and then plot them. However, I realized that I get different results for PC1 (and PC2) when I use plotPCA (from DESeq2) versus prcomp. What is the bug in my code? dds <- DESeqDataSetFromHTSeqCount( sampleTable = sampleTable, directory = directory, design= ~group) rld <- rlog(dds, blind=TRUE) **From DESeq2:** data <- plotPCA(rld, intgroup=c("treatment", "sex"), returnData=TRUE ) data$PC1 > [1] -1.9169863 -2.0420236 -1.9979900 -1.8891056 0.9242008 1.0638140 >[7] 0.6911183 1.0551864 0.9598643 -1.5947907 -1.5666862 -1.6694684 >[13] -1.2523658 -1.0785239 1.3005578 2.2913536 2.5381586 2.4287372 >[19] 1.7549495 **Using prcomp** mat <- assay(rld) pca<-prcomp(t(mat)) pca <- as.data.frame(pca$x) pca$PC1 >[1] -1.29133735 -2.96001734 -3.08855648 -3.51855030 -0.68814370 -0.01753268 >[7] -2.31119461 -0.10533404 -1.45742308 -1.30239486 -1.36344946 -1.93761580 >[13] 6.04484324 4.83113873 0.75050886 -0.14905189 2.70759465 3.43851631 >[19] 2.41799979
Before calling `prcomp()` internally (<a href="https://github.com/mikelove/DESeq2/blob/master/R/plots.R#L226">HERE</a>), the DESeq2 function pre-filters your data to the most variable genes; the `ntop` parameter (500 by default) controls this and can be raised to use all genes. Kevin
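A sketch to reconcile the two, assuming `rld` as in your code (`rowVars` here is from matrixStats):

```r
library(matrixStats)
mat <- assay(rld)
rv <- rowVars(mat)
top <- order(rv, decreasing = TRUE)[seq_len(min(500, length(rv)))]
pca <- prcomp(t(mat[top, ]))   # should now track plotPCA's PC1/PC2

# or make plotPCA use every gene instead:
data <- plotPCA(rld, intgroup = c("treatment", "sex"),
                returnData = TRUE, ntop = nrow(rld))
```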
biostars
{"uid": 416573, "view_count": 3896, "vote_count": 2}
Hey, I am actually very new to this and I have few genomes to analyse. I am looking through many methods, could anyone please suggest me any good method (reliable but relatively easy to use) for identifying repetitive DNA/ transposons?
If you have large sequences to analyse, you can download the RepeatMasker application, which will identify the different classes of transposons and repeat elements. RepeatMasker traditionally relies on a proprietary sequence database called RepBase, which can be restrictive. If you want to avoid RepBase/RepeatMasker, there are now some alternatives like Dfam. Links below. https://www.repeatmasker.org/RepeatMasker/ https://github.com/HullUni-bioinformatics/TE-search-tools https://dfam.org/home https://bioinformatics.stackexchange.com/questions/343/are-there-any-repbase-alternatives-for-genome-wide-repeat-element-annotations
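For RepeatMasker, a minimal invocation sketch (flags per its command-line help; swap the species term for your own taxon):

```bash
RepeatMasker -pa 4 -species "arabidopsis" genome.fa
# writes genome.fa.masked, genome.fa.out (per-repeat annotations) and genome.fa.tbl (summary)
```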
biostars
{"uid": 9531057, "view_count": 396, "vote_count": 3}
Hi everybody, I have some CNV data that needs to be turned into a chromosomal plot. The format is like this: chr start end ploidy loss/gain chr1 86000000 117150000 1 loss chr2 70250000 70500000 3 gain chr2 203050000 204650000 3 gain (The last column can probably be omitted as we have all the info we need in the ploidy column). I would like to visualize them on a chromosomal plot. The only tool I've managed to make work with my data is CNAnorm (http://www.bioconductor.org/packages/devel/bioc/vignettes/CNAnorm/inst/doc/CNAnorm.pdf), but it insists on analyzing my already-analyzed data, so I can't get the results I need. Here's my CNAnorm output, although the data points are all wrong because of the extra analysis. ![enter image description here][1] Any help would be appreciated; please keep in mind that I don't have that much R experience, and that I only need a viewer for the time being. Thanks everyone :) [1]: https://dl.dropboxusercontent.com/u/13728657/TEMP/Public%20-%20Don%27t%20overwrite/CNV%20for%20PR.png
You can plot your data like this. CNVs are plotted as segments with a point corresponding to their midpoint. Chromosomes are separated out into facets. library("ggplot2") chr <- c("chr1", "chr2", "chr2") start <- c(86000000, 70250000, 203050000) end <- c(117150000, 70500000, 204650000) center <- start + (end - start)/2 ploidy <- c(1, 3, 3) ploidy_df <- data.frame(chr, start, end, center, ploidy) ggplot(ploidy_df, aes(x=center, y=ploidy)) + geom_point() + geom_segment(aes(x=start, y=ploidy, xend=end, yend=ploidy, colour="segment")) + geom_hline(yintercept=2, linetype=2) + facet_wrap(~chr) + xlab("Position") + theme_bw() ![enter image description here][1] [1]: https://i.imgsafe.org/39e3a1b.jpeg
biostars
{"uid": 179503, "view_count": 5186, "vote_count": 1}
I usually use `bedtools intersect` to find overlapping regions of bed files, but it seems like this tool can only output overlap between a pair of files. I need something that can do the following from many bed files and **only** report regions contained in all of the bed files. Example input from 4 separate bed files: ``` chr1 50 100 chr1 60 120 chr1 30 90 chr1 50 90 ``` Desired output: ``` chr1 60 90 ``` Any tools for this? Maybe I should just `cat` all the bed files together and merge them?
https://www.biostars.org/p/172566/ (`bedops` option)

There is also bedtools multiinter:

    Tool:    bedtools multiinter (aka multiIntersectBed)
    Version: v2.26.0
    Summary: Identifies common intervals among multiple BED/GFF/VCF files.

    Usage:   bedtools multiinter [OPTIONS] -i FILE1 FILE2 .. FILEn

    Requires that each interval file is sorted by chrom/start.
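For the "present in all files" requirement specifically, a sketch (column 4 of the multiinter output is the number of files covering each sub-interval; inputs must be sorted; file names illustrative):

```bash
bedtools multiinter -i a.bed b.bed c.bed d.bed \
  | awk '$4 == 4' | cut -f1-3 | bedtools merge -i - > common.bed
```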
biostars
{"uid": 323450, "view_count": 5064, "vote_count": 3}
I have a matrix (or data frame, Excel spreadsheet). It is populated with alphanumeric identifiers (ATG numbers if you are familiar with plants; ex. At1G45623). The first column has only 1 occurrence of each identifier (no duplicates). Each row, after the first column, has a variable number of these alphanumeric identifiers. Anywhere from 2 to 350 in each row. A simplified version is below: z a b c d e f g h i j y a b c z w h n q i j w j i h a f d c b g e p x i j a b c d h e f m k l o u p s t v So, each row is like an array where the name of the array is the value in the first column. I would like to cluster these 'arrays' so that row z (array z) and row w cluster together, and not row z with row y. Disregarding the first column, rows z and w have the same group of letters (except for 1 letter), but in a very different sequence. Rows z and y have similar sequences, but somewhat different letters. The method I am searching for would cluster rows z and w together before z and y. The clustering techniques I am familiar with all take the sequence of the values into account. I am looking for a method that would disregard the sequence and just consider the contents of the row. This is further complicated by the fact that the rows have very variable numbers of values, which is why I have included row x, as a reminder that the size of each 'array' is variable. Does anyone know of anything in Perl, Python or R that could help?
Compute the Jaccard index (or any other suitable measure of similarity over sets, see the [sets][1] package in R) and use a clustering algorithm. [1]: https://cran.r-project.org/web/packages/sets/index.html
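A minimal base-R sketch using the example rows from your question (Jaccard distance = 1 - Jaccard index, then hierarchical clustering):

```r
sets <- list(z = c("a","b","c","d","e","f","g","h","i","j"),
             y = c("a","b","c","z","w","h","n","q","i","j"),
             w = c("j","i","h","a","f","d","c","b","g","e","p"),
             x = c("i","j","a","b","c","d","h","e","f","m","k","l","o","u","p","s","t","v"))
jaccard <- function(a, b) length(intersect(a, b)) / length(union(a, b))
n <- length(sets)
d <- matrix(0, n, n, dimnames = list(names(sets), names(sets)))
for (i in seq_len(n)) for (j in seq_len(n)) d[i, j] <- 1 - jaccard(sets[[i]], sets[[j]])
hc <- hclust(as.dist(d), method = "average")
plot(hc)   # z and w pair off before y joins, as desired
```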
biostars
{"uid": 198072, "view_count": 1968, "vote_count": 1}
Because of splicing, one variant may affect different transcripts, and the effect can differ from one transcript to another. In the case where there are multiple transcripts associated with one gene, Annovar will output the information for all the transcripts. However, for the SIFT prediction (or any other SNP annotation tool - Polyphen2, MutationTaster...), even if different transcripts are affected by the variant, there's only one score/prediction. For example: C6orf25:NM_138273:exon3:c.C432G:p.F144L,C6orf25:NM_025260:exon4:c.C523G:p.R175G,C6orf25:NM_138272:exon4:c.C523G:p.R175G,C6orf25:NM_138277:exon4:c.C523G:p.R175G Here there are 4 different transcripts listed, however the only SIFT prediction is `0.34/T`. So my question is the following: how do I know which transcript was taken into consideration for the prediction? If there's only one prediction, does that mean that it applies to all the transcripts? I am trying to understand how I can interpret the SIFT prediction in that case and any help in that regard is welcome.
On the Annovar website, it's stated that > ... it can have multiple scores such as "1.0;1.0;1.0;1.0;0.993;1.0;1.0;1.0;0.999" and multiple predictions such as "D;D;D;D;D;D;D;D;D", probably due to multiple transcriptional isoforms. In this case, only the largest score (1 is the largest score among multiple isoforms), as well as its associated D/P/B annotation, will be used in ANNOVAR database." http://www.openbioinformatics.org/annovar/annovar_filter.html#ljb23 I believe the score that ANNOVAR reports is the most deleterious score out of all the transcripts. If you want scores that are transcript specific, or to figure out which transcript that score belongs to, I suggest either going back to dbNSFP, or trying out VEP or SnpEff. Hope that helps
biostars
{"uid": 132498, "view_count": 4359, "vote_count": 3}
Hi everyone! I'm quite new to this genomics stuff, so sorry if this question seems silly (or just cheeky). I have looked for similar questions, but haven't found any (I'm terrible at searching, though). So. I have Illumina reads for an E. coli strain that has GFP inserted in it. I know roughly where the GFP should be, and I have the reference genome for the unmodified E. coli strain. I want to assemble a new reference for my modified strain. From what I've seen so far, the idea would be to do a de novo assembly, then some scaffolding and mapping to the reference I have, using something like Mauve. But as my sequences map almost perfectly to the reference, and the only important breakpoint is the GFP insertion, can't I use this information to improve my assembly? If so, how? Thanks! Pablo
It's called reference-guided assembly. Many assemblers can do it.
biostars
{"uid": 181310, "view_count": 1484, "vote_count": 1}
<p>I was wondering if there is an easy way to apply an identity cutoff (e.g. sequences down to 95% identity) to blast output locally. I had a look at the blastall usage options but haven't seen any flag (or parameter) to do it. There is an e-value cutoff option but not %id or score. I have a perl script to do this cutoff on the blast output, but the script is huge, almost 50-60 lines, so I am trying to avoid that and would prefer an easier script if any of you have one. </p>
<p>I think I can do it this way: once I have the normal (tabular) blast output, I can filter it.</p> <pre><code>awk '{OFS="\t"; if($id_column &gt; cutoff) print $columns_you_want}' blast_output.file &gt;&gt; output </code></pre> <p>For example (in the tabular output, column 3 is the percent identity): </p> <pre><code>awk '{OFS="\t"; if($3 &gt; 95) print $1,$4,$5,$8}' blast_output.file &gt;&gt; output </code></pre>
biostars
{"uid": 68693, "view_count": 4805, "vote_count": 1}
Hi, I would like to use the "clump" function for my association analysis results, which use 1000 Genomes imputed data (and I have many files), so I guess I need to use the 1000 Genomes genotypes as a reference file, but I am having trouble with this. I have downloaded the vcf files from the MACH website [here][1]. Then I thought I could use the Plink 1.9 version that reads vcf and use the "clump" command directly, but apparently there are issues with the vcf file's allele names (I would like to keep as many variants as possible), so I followed [this link][2] to convert from vcf to plink. But I get a segfault error as soon as I run this command: bcftools annotate -Ob -x ID chr21.test.vcf which says: ``` [W::vcf_parse] INFO 'NS' is not defined in the header, assuming Type=String Encountered error, cannot proceed. Please check the error output above. ``` I have tried to use `bcftools reheader -h` but it doesn't work. Can someone please help? Thank you!! [1]: http://www.sph.umich.edu/csg/abecasis/MACH/download/1000G.2012-03-14.html [2]: http://apol1.blogspot.co.uk/2014/11/best-practice-for-converting-vcf-files.html
If you don't need to deal with split multi-allelic variants, Plink 1.9's `--set-missing-var-ids` flag should do a good enough job of assigning names. (You'll need a build from November 25th or later.)
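A sketch following the template syntax in the Plink 1.9 documentation (@ expands to the chromosome, # to the position; the output name is illustrative):

```bash
plink --vcf chr21.test.vcf \
      --set-missing-var-ids @:#[b37]\$1,\$2 \
      --make-bed --out chr21_named
```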
biostars
{"uid": 123299, "view_count": 4442, "vote_count": 1}
Dear all, regarding pathogenicity predictors for cancer mutations, which algorithms or meta-predictors would you recommend? Among possible choices: CADD, MutationTaster, FATHMM, CHASM, Condel, CanDrA, or any other predictors/meta-predictors. thank you, bogdan
Take your pick... This is not a complete listing, as there are many more. #Missense predictions - <a href="http://sift.jcvi.org/">SIFT</a> - <a href="http://agvgd.iarc.fr/">Align-GVGD</a> - <a href="http://mutationassessor.org/r3/">Mutation Assessor</a> - <a href="http://www.pantherdb.org/">PANTHER</a> - <a href="http://mendel.stanford.edu/SidowLab/downloads/MAPP/index.html">MAPP</a> - <a href="http://mupro.proteomics.ics.uci.edu/">MUpro</a> - <a href="http://www.mutationtaster.org/">MutationTaster</a> - <a href="http://folding.biofold.org/i-mutant/i-mutant2.0.html">I-Mutant</a> - <a href="http://decrypthon.igbmc.fr/kd4v">KD4v</a> - <a href="http://fathmm.biocompute.org.uk/">FATHMM</a> - <a href="http://sites.google.com/site/jpopgen/dbNSFP">dbNSFP</a> - <a href="http://genetics.bwh.harvard.edu/pph2/">PolyPhen-2</a> - <a href="http://ls-snp.icm.jhu.edu/ls-snp-pdb/">LS-SNP/PDB</a> - <a href="http://snpeffect.switchlab.org/">SNPeffect 4.0</a> - <a href="http://mmb.irbbarcelona.org/PMut/">PMut</a> - <a href="https://www.rostlab.org/services/SNAP/">SNAP</a> - <a href="http://snps.biofold.org/phd-snp/phd-snp.html">PhD-SNP</a> - <a href="http://snps.biofold.org/snps-and-go/snps-and-go.html">SNPs&GO</a> - <a href="http://www.mobioinfor.cn/parepro/contact.htm">Parepro</a> - <a href="http://research-public.gene.com/">CanPredict</a> - <a href="http://snpanalyzer.uthsc.edu/">nsSNPAnalyzer</a> - <a href="http://mutpred.mutdb.org/">MutPred2</a> - <a href="http://hansa.cdfd.org.in:8080/">HANSA</a> #Splice predictions - <a href="http://www.fruitfly.org/seq_tools/splice.html">NNSplice</a> - <a href="http://www.cbcb.umd.edu/software/GeneSplicer/gene_spl.shtml">GeneSplicer</a> - <a href="http://genes.mit.edu/burgelab/maxent/Xmaxentscan_scoreseq.html">MaxEntScan</a> - <a href="http://www.umd.be/HSF/">HSF</a> - <a href="http://www.cbs.dtu.dk/services/NetGene2/">NetGene2</a> #Protein modelling (from amino acid sequence) - <a href="https://www.proteinmodelportal.org/?pid=modelling_interactive">Protein Model Portal</a> *[uses various modelling algorithms and produces PDB files, which can be loaded into protein viewers like Jmol]* #Non-coding (i.e. regulatory) - <a href="http://cadd.gs.washington.edu/">CADD</a> (germline variants) - <a href="https://www.ncbi.nlm.nih.gov/pubmed/25338716">DANN</a> (germline variants) - <a href="http://fathmm.biocompute.org.uk/fathmmMKL.htm">FATHMM-MKL</a> (germline variants) - <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5015703/">GWAVA</a> (germline variants | somatic mutations) - <a href="http://funseq2.gersteinlab.org/">Funseq2</a> (somatic mutations) - <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4224693/">SurfR</a> (rare variants | complex disease variants | all other variants) # Other - <a href="http://mulinlab.tmu.edu.cn/gwas4d">GWAS3D / GWAS4D</a> ------------------------- ------------------ For further reading: - UK perspective, refer to the guidelines by the Association for Clinical Genomic Science: http://www.acgs.uk.com/quality/best-practice-guidelines/ - US perspective, refer to American College of Medical Genetics: https://www.acmg.net/docs/Standards_Guidelines_for_the_Interpretation_of_Sequence_Variants.pdf
biostars
{"uid": 286364, "view_count": 3430, "vote_count": 5}
Hi everybody, I'm trying to run this pipeline with bigWig files computeMatrix reference-point --referencePoint TSS --beforeRegionStartLength 30 --afterRegionStartLength 300 -R transcribed_genes_sort.txt -S BJ_rep1_neg_strand.bw BJ_rep1_pos_strand.bw BJ_rep2_neg_strand.bw BJ_rep2_pos_strand.bw -bl DACblacklist.bed.gz --skipZeros -p 1 -o matrix_TSS.gz and I get this error [bwHdrRead] There was an error while reading in the header! Traceback (most recent call last): File "/home/projas/.local/bin/computeMatrix", line 11, in <module> main(args) File "/home/projas/.local/lib/python3.5/site-packages/deeptools/computeMatrix.py", line 414, in main hm.computeMatrix(scores_file_list, args.regionsFileName, parameters, blackListFileName=args.blackListFileName, verbose=args.verbose, allArgs=args) File "/home/projas/.local/lib/python3.5/site-packages/deeptools/heatmapper.py", line 235, in computeMatrix chromSizes, _ = getScorePerBigWigBin.getChromSizes(score_file_list) File "/home/projas/.local/lib/python3.5/site-packages/deeptools/getScorePerBigWigBin.py", line 161, in getChromSizes fh = pyBigWig.open(fname) RuntimeError: Received an error during file opening! However, the bigWig files start with something like: chr1 13264 13314 1.8 chr1 13444 13454 3 chr1 13454 13464 4.6 chr1 13464 13474 8.6 Any ideas? Thanks!
For those wondering, the cause of the problem was that the input files were actually bedGraph (text files) rather than bigWigs (binary format). Presumably there was something wrong with the conversion command with UCSC tools.
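In case anyone hits the same thing, a conversion sketch with the UCSC utilities (`fetchChromSizes` is the UCSC helper script; file names are illustrative):

```bash
fetchChromSizes hg38 > hg38.chrom.sizes
sort -k1,1 -k2,2n BJ_rep1_neg_strand.bedGraph > sorted.bedGraph
bedGraphToBigWig sorted.bedGraph hg38.chrom.sizes BJ_rep1_neg_strand.bw
```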
biostars
{"uid": 360452, "view_count": 4461, "vote_count": 1}
Is there a way to generate a heatmap of a complex of transcription factors at the TSS? In more detail, I am trying to generate a heatmap of the 7SK snRNP, which is composed of Hexim, Larp7, and Cdk9. I can easily generate individual heatmaps for each of the separate TFs using ngsplot or deepTools, but I was wondering if there was a way to combine all three into a single heatmap? I also have some experience with R and Bioconductor, so packages can also be submitted as answers. Thanks in advance
<p>Just use the &quot;release-1.6&quot; branch of deepTools; you can then include multiple samples in a heatmap (or profile plot).</p>
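For reference, with current deepTools (2.x) syntax this would look something like the sketch below (bigWig and BED file names are illustrative):

```bash
computeMatrix reference-point --referencePoint TSS \
  -S Hexim.bw Larp7.bw Cdk9.bw -R genes.bed \
  -b 2000 -a 2000 --skipZeros -o complex_matrix.gz
plotHeatmap -m complex_matrix.gz -out complex_heatmap.png
```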
biostars
{"uid": 170840, "view_count": 2383, "vote_count": 1}
Morning Gang, I feel like I am just missing something in the documentation for this, but here goes. I'm currently running some salmonellas through the works using a very well annotated reference, and we have come across a large dropout of genes in one location. We went back and confirmed they are not there with a quick PCR, so I know the data is correct; however, when I go to make the consensus chromosome I cannot figure out how to include the dropout in the file. For the mapping I'm using TMAP (Ion Torrent data, single-end reads), freebayes for the haploid VCF creation, and SnpEff for the SNP annotation. I know that if I want to create a new chromosome for our sample I can use the VCF file to make the SNP changes in the reference file; however, that will retain the genes that have dropped out, because I'm making the changes to the reference, not the alignment. Does anyone have a suggestion or a workaround for this type of problem? Thank you in advance, Sean ***EDIT 15MAR2016*** So I have been working on this for some time now and I'm close to my solution, but I'm having an issue with the "new" sequence. Below is my code. ## this makes a dict of the samtools depth coverage input file of all the 0's coverageDict = {} ## this loops over the input depth information and appends the dictionary with ## a key,value as genome position, depth of coverage coverage = open("/home/sbrimer/Desktop/Columbia ST/1819_1/coverage.txt","r") for line in coverage: coverages = line.split("\t") coverageDict[coverages[1]] = coverages[2] coverage.close() ## this makes a set of the large dict of just the missing. missing = {index for index,value in coverageDict.items() if value == 0} def filter_coverage(index,value): return 'N' if index in missing else value ref = open("Salmonella enterica subsp.fasta","rU") ## this should read in the sequence file as an object, change it to a mutable ## object, then anywhere the coverageDict value = 0. Replace that nt with "N" for seq_record in SeqIO.parse(ref,"fasta"): refSeq = seq_record.seq new_seq = "".join(filter_coverage(index,value) for index,value in enumerate(refSeq.tomutable(), start=1)) new_rec = SeqRecord(Seq(new_seq),id="new_id",description="new_description") SeqIO.write(new_rec,"NewSeq.fasta","fasta") ref.close() Everything seems to work fine until I want to add the 'N's to the fasta file: I can get the generic "new_id" and "new_description" to show up in the test file, but not the new sequence. I have also tried the following: first this refSeq = seq_record.seq refSeq_mut = refSeq.tomutable() then refSeq = str(seq_record.seq) but for whatever reason I cannot seem to change the seq_record.seq part of the record. Has anyone else had this issue?
Hi All, It turns out the error in this script is right at the top. When I make the dictionary, the index and value are encoded as strings and not numbers, so when all the other stuff runs, it runs without error and returns the original sequence, because numbers do not match words. Seriously. That took hours for me to figure out. Lesson learned. Please find the corrected script below if for some reason anyone else needs to do something like this. All that is left to do with it is to set up the argparser for the options I want so it will work with any file I pass. from Bio.SeqRecord import SeqRecord from Bio.Alphabet import IUPAC from Bio.Seq import Seq from Bio import SeqIO import argparse ## this makes a dict of the samtools depth coverage input file of all the 0's coverageDict = {} ## first the empty dictionary ## this loops over the input depth information and appends the dictionary with ## a key,value as genome position, depth of coverage coverage = open("/home/sbrimer/Desktop/Columbia ST/1819_1/coverage.txt","r") for line in coverage: coverages = line.strip().split('\t') #added strip, values also had newline character coverageDict[int(coverages[1])] = int(coverages[2]) coverage.close() ## making a set of only the index of the missing missing = {index for index,value in coverageDict.items() if value == 0} def filter_coverage(index,value): if index in missing: return "N" else: return value ## this should read in the sequence file as an object, change it to a mutable ## object, then anywhere the coverageDict value = 0. Replace that nt with "N" append_handle = open("NewFile.fasta","a") for seq_record in SeqIO.parse("Salmonella enterica subsp.fasta","fasta"): newSeq= "".join(filter_coverage(index,value)for index,value in enumerate(seq_record.seq, start= 1)) SeqIO.write(SeqRecord(Seq(newSeq),id="something",description="something_else"),append_handle,"fasta") append_handle.close()
biostars
{"uid": 179755, "view_count": 3918, "vote_count": 2}
I have a **normalized data frame for a timecourse experiment** (MS/MS). Samples are named as ***Genotype_Time_Replicate*** (e.g. `AOX_1h_4`). Each sample has 4 replicates for each time point. (>1000 Genes/AGIs) Sample data df <- structure(list(AGI = c("ATCG01240", "ATCG01310", "ATMG00070"), aox2_0h__1 = c(15.79105291, 14.82652303, 14.70630068), aox2_0h__2 = c(16.06494674, 14.50610036, 14.52189807), aox2_0h__3 = c(14.64596287, 14.73266459, 13.07143141), aox2_0h__4 = c(15.71713641, 15.15430026, 16.32190068 ), aox2_12h__1 = c(14.99030606, 15.08046949, 15.8317372), aox2_12h__2 = c(15.15569857, 14.98996474, 14.64862254), aox2_12h__3 = c(15.12144791, 14.90111092, 14.59618842), aox2_12h__4 = c(14.25648197, 15.09832061, 14.64442686), aox2_24h__1 = c(15.23997241, 14.80968391, 14.22573239 ), aox2_24h__2 = c(15.57551513, 14.94861669, 15.18808897), aox2_24h__3 = c(15.04928714, 14.83758685, 13.06948037), aox2_24h__4 = c(14.79035385, 14.93873234, 14.70402827), aox5_0h__1 = c(15.8245918, 14.9351844, 14.67678306), aox5_0h__2 = c(15.75108628, 14.85867002, 14.45704948 ), aox5_0h__3 = c(14.36545859, 14.79296855, 14.82177912), aox5_0h__4 = c(14.80626019, 13.43330964, 16.33482718), aox5_12h__1 = c(14.66327372, 15.22571466, 16.17761867), aox5_12h__2 = c(14.58089039, 14.98545497, 14.4331578), aox5_12h__3 = c(14.58091828, 14.86139511, 15.83898617 ), aox5_12h__4 = c(14.48097297, 15.1420725, 13.39369381), aox5_24h__1 = c(15.41855602, 14.9890092, 13.92629626), aox5_24h__2 = c(15.78386057, 15.19372889, 14.63254456), aox5_24h__3 = c(15.55321382, 14.82013321, 15.74324956), aox5_24h__4 = c(14.53085803, 15.12196994, 14.81028556 ), WT_0h__1 = c(14.0535031, 12.45484834, 14.89102226), WT_0h__2 = c(13.64720361, 15.07144643, 14.99836235), WT_0h__3 = c(14.28295759, 13.75283646, 14.98220861), WT_0h__4 = c(14.79637443, 15.1108037, 15.21711524 ), WT_12h__1 = c(15.05711898, 13.33689777, 14.81064042), WT_12h__2 = c(14.83846779, 13.62497318, 14.76356308), WT_12h__3 = c(14.77215863, 14.72814995, 13.0835214), WT_12h__4 = c(14.70685445, 14.98527337, 16.12727292), WT_24h__1 = c(15.43813077, 14.56918572, 14.92146565 ), WT_24h__2 = c(16.05986898, 14.70583866, 15.64566505), WT_24h__3 = c(14.87721853, 13.22461859, 16.34119942), WT_24h__4 = c(14.92822133, 14.74382383, 12.79146694)), class = "data.frame", row.names = c(NA, -3L)) How can I plot volcano plots for the different comparisons, i.e. ***'AOX2_0h__vs_WT_0h_', 'AOX2_12h__vs_WT_12h_', 'AOX2_24h__vs_WT_24h_', 'AOX5_0h__vs_WT_0h_', 'AOX5_12h__vs_WT_12h_', 'AOX5_24h__vs_WT_24h_', 'AOX2_0h__vs_AOX5_0h_', 'AOX2_12h__vs_AOX5_12h_', 'AOX2_24h__vs_AOX5_24h_'*** Also, at the end, can I get a summary table of the statistics? Could anyone help with this, please! P.S.: I did this with the `DEP package`, but after I manually worked on the data frame (in Excel, after extracting it with `df_import <- as.data.frame(assays(df))`) it's no longer a (`SummarizedExperiment`), so it doesn't work with those functions.
A Volcano Plot just requires p-values and fold changes. Where did you get your data from? It looks to be already normalised. Which program did you use? Here is the data, for everyone else: df AGI aox2_0h__1 aox2_0h__2 aox2_0h__3 aox2_0h__4 aox2_12h__1 aox2_12h__2 1 ATCG01240 15.79105 16.06495 14.64596 15.71714 14.99031 15.15570 2 ATCG01310 14.82652 14.50610 14.73266 15.15430 15.08047 14.98996 3 ATMG00070 14.70630 14.52190 13.07143 16.32190 15.83174 14.64862 aox2_12h__3 aox2_12h__4 aox2_24h__1 aox2_24h__2 aox2_24h__3 aox2_24h__4 1 15.12145 14.25648 15.23997 15.57552 15.04929 14.79035 2 14.90111 15.09832 14.80968 14.94862 14.83759 14.93873 3 14.59619 14.64443 14.22573 15.18809 13.06948 14.70403 aox5_0h__1 aox5_0h__2 aox5_0h__3 aox5_0h__4 aox5_12h__1 aox5_12h__2 1 15.82459 15.75109 14.36546 14.80626 14.66327 14.58089 2 14.93518 14.85867 14.79297 13.43331 15.22571 14.98545 3 14.67678 14.45705 14.82178 16.33483 16.17762 14.43316 aox5_12h__3 aox5_12h__4 aox5_24h__1 aox5_24h__2 aox5_24h__3 aox5_24h__4 1 14.58092 14.48097 15.41856 15.78386 15.55321 14.53086 2 14.86140 15.14207 14.98901 15.19373 14.82013 15.12197 3 15.83899 13.39369 13.92630 14.63254 15.74325 14.81029 WT_0h__1 WT_0h__2 WT_0h__3 WT_0h__4 WT_12h__1 WT_12h__2 WT_12h__3 WT_12h__4 1 14.05350 13.64720 14.28296 14.79637 15.05712 14.83847 14.77216 14.70685 2 12.45485 15.07145 13.75284 15.11080 13.33690 13.62497 14.72815 14.98527 3 14.89102 14.99836 14.98221 15.21712 14.81064 14.76356 13.08352 16.12727 WT_24h__1 WT_24h__2 WT_24h__3 WT_24h__4 1 15.43813 16.05987 14.87722 14.92822 2 14.56919 14.70584 13.22462 14.74382 3 14.92147 15.64567 16.34120 12.79147
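In the meantime, a minimal base-R sketch for one comparison (aox2_0h vs WT_0h); this is not a substitute for a proper limma/DEP analysis, and it assumes the values are already log2-scale, so a difference of means is a log2 fold change:

```r
grp1 <- grep("^aox2_0h", colnames(df))   # treated columns
grp2 <- grep("^WT_0h",   colnames(df))   # control columns
res <- t(apply(df[, -1], 1, function(x) {
  tt <- t.test(x[grp1 - 1], x[grp2 - 1])              # -1: the AGI column was dropped
  c(logFC = mean(x[grp1 - 1]) - mean(x[grp2 - 1]), p = tt$p.value)
}))
plot(res[, "logFC"], -log10(res[, "p"]), pch = 20,
     xlab = "log2 fold change", ylab = "-log10 p-value")
```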
biostars
{"uid": 369558, "view_count": 1026, "vote_count": 1}
I have RNA-seq data for 20 samples with 2 conditions and 2 sexes (male, female : control, treatment). I am very new to RNA-seq analysis and am trying to find the DEGs using DESeq2. Since I want the normalization to be calculated based on all samples, I get the rld based on all of them, and then I will use contrasts to find the DEGs for 1. treatment vs. control male 2. treatment vs. control female. For the QC, the PCA plot separates male and female, but control and treatment are not separated very well on PC2 for male C1 (Figure 1)![enter image description here][1] Then I decided to plot the PCA only for male samples. Again, PC1 does not separate the control and treatment male samples, but PC2 is kinda separating them (Figure 2). ![enter image description here][2] My questions are: 1. What numbers on the PCA plot (x and y axes) decide the separation? I read on a [website][3] that samples at PC1>0 are outliers. Is that true? 2. Can I just look at Figure 1, remove male C1, and continue the DEG analysis with 9 samples? Or should I definitely plot Figure 2? 3. If I need to consider Figure 2, can I rely only on PC2, which separates control and treatment, and continue the DEG analysis, or should I remove the C1, tr4 and tr5 samples and then do the DEG analysis on the remaining 7 samples? [1]: https://i.imgur.com/UgC0byK.png [2]: https://i.imgur.com/mpnghro.png [3]: https://www.huber.embl.de/users/klaus/Teaching/DESeq2Predoc2014.html#pca-and-sample-heatmaps
Hey,

Regarding the website to which you are referring, that was written by Wolfgang Huber. He is *not* saying, in his tutorial, that PC1>0 is a global definition of an outlier on a PCA bi-plot. It is just in *his* example that he decided to select 2 outlier samples via a vertical cut-off line drawn at PC1>0. In another experiment, the cut-off for outliers may be at PC1 < -500, or PC1 > 42. The units on these axes are, well, essentially unitless.

Looking at your first plot, you cannot really remove male sample C1 just because it groups away from the main group of samples. You should first investigate *why* it is grouping the way it is. Is it a sample mix-up? Is it a male with Klinefelter syndrome? Is the clinical information for this sample incorrect?

For your third question, you need to consider the fact that `sex` represents a substantial amount of variation in your dataset, i.e., more than the very condition of interest that you are studying. Was this expected? Did you include `sex` in your design formula as a covariate? If you include it as a covariate in your design formula, normalise your data again, and then run the rld function with `blind=FALSE`, do you see the same PCA bi-plot?

You can also segregate your dataset by `male` | `female` and then cross-compare the results in a sort of meta-analysis, if you wish. `sex` is a clear and major source of variation among your samples. If you proceed with this route (sample segregation based on `sex`), then I see no reason to exclude C1.

Kevin
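A minimal sketch of what including `sex` as a covariate might look like (object names like `counts` and `coldata` are placeholders for your own data):

    library(DESeq2)

    # sex as a covariate, condition (control/treatment) as the variable of interest
    dds <- DESeqDataSetFromMatrix(countData = counts,
                                  colData   = coldata,
                                  design    = ~ sex + condition)
    dds <- DESeq(dds)

    # transform with blind = FALSE so that the design is taken into account
    vsd <- vst(dds, blind = FALSE)
    plotPCA(vsd, intgroup = c("sex", "condition"))

    # the treatment-vs-control contrast, adjusted for sex
    res <- results(dds, contrast = c("condition", "treatment", "control"))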
biostars
{"uid": 400268, "view_count": 2220, "vote_count": 2}
Can EnhancedVolcano label genes that aren't significant or don't meet the fold-change cut-off? Can we select which ones? I tried, but the label just shows up as a gene ID on the x-axis with a vertical bar. Section 4.8 seems to suggest it's not possible.

How does EnhancedVolcano determine which genes to label? Is there a way to label more genes beyond the ones selected automatically? I guess, following 4.8, you can use `selectLab`, but those genes have to be present in `lab` already. What happens if you `selectLab` genes that interfere with the ones automatically selected by EnhancedVolcano, do they get bumped or removed? Can we remove selected labels we don't want? No matter how genes are labeled, it seems to pick the same ones.

The default is log2FC = 2 - does that mean a minimum 4-fold change, or a total 2-fold change (log2 ratio of 1)?

Should I use `lfcShrink(res)` to get the volcano plot, or is plain `res` fine? The vignette seems to suggest `lfcShrink`, but the PDF manual example uses just `res`. I'm not sure what the proper or standard way is to make volcano plots from RNA-seq data run through DESeq2, and how people normally publish them.

This package does look great. I'm pretty much a beginner at this, and it is very user-friendly, with a nice vignette.
I wrote *EnhancedVolcano* - thanks for your comments.

You mentioned section 4.8: [4.8 Only label key transcripts](https://bioconductor.org/packages/release/bioc/vignettes/EnhancedVolcano/inst/doc/EnhancedVolcano.html#only-label-key-transcripts). This will also label genes that do not pass your thresholds. However, the points for these non-statistically-significant genes are usually dense, at the bottom of the plot, so labeling all of them in a clear way is almost always impossible. If you want to favour the labeling of certain genes, then you can (I believe) re-order your results data-frame so that these genes appear in its first few rows. You can also supply custom labeling and colouring via [4.11 Override colouring scheme with custom key-value pairs](https://bioconductor.org/packages/release/bioc/vignettes/EnhancedVolcano/inst/doc/EnhancedVolcano.html#override-colouring-scheme-with-custom-key-value-pairs):

[![download](https://preview.ibb.co/ejhkLL/download.png)](https://ibb.co/eNGZEf)

For the fold changes, you can technically supply any numbers as the value of `x`. These can be log2 fold changes, linear fold changes, etc. The corresponding cut-off, `FCcutoff`, does not assume any distribution. If you supply linear fold changes to `x` and set `FCcutoff=2`, then the cut-off is linear FC = 2. If you supply log2 fold changes as `x` and set `FCcutoff=2`, then the cut-off relates to log2FC = 2.

`lfcShrink()` can be used when you have initially used `betaPrior = FALSE` during the `DESeq()` function. `lfcShrink()` was a recent implementation in DESeq2, so I am not sure of the exact situations where it should (or should not) be used.

Kevin
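For reference, a minimal sketch of the `selectLab` usage discussed above (the gene names and cut-offs are placeholders):

    library(EnhancedVolcano)

    EnhancedVolcano(res,
        lab = rownames(res),
        x = 'log2FoldChange',
        y = 'pvalue',
        selectLab = c('GENE1', 'GENE2'),  # only these are labelled, significant or not
        pCutoff = 10e-6,
        FCcutoff = 2.0)                   # interpreted on the scale of the x values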
biostars
{"uid": 349958, "view_count": 3782, "vote_count": 1}
I am trying to compile shapeit4 on my cluster without root permissions. The makefile is as follows:

    #COMPILER MODE C++11
    CXX=g++ -std=c++11

    #HTSLIB LIBRARY [SPECIFY YOUR OWN PATHS]
    HTSLIB_INC=/share/apps/genomics/htslib-1.10/include/
    HTSLIB_LIB=/share/apps/genomics/htslib-1.10/lib/libhts.a

    #BOOST IOSTREAM & PROGRAM_OPTION LIBRARIES [SPECIFY YOUR OWN PATHS]
    BOOST_INC=/share/apps/boost-1.71.0/include/
    BOOST_LIB_IO=/share/apps/boost-1.71.0/lib/libboost_iostreams.a
    BOOST_LIB_PO=/share/apps/boost-1.71.0/lib/libboost_program_options.a

    #HTSLIB LIBRARY [SPECIFY YOUR OWN PATHS]
    #HTSLIB_INC=/software/UHTS/Analysis/samtools/1.4/include
    #HTSLIB_LIB=/software/UHTS/Analysis/samtools/1.4/lib64/libhts.a

    #BOOST IOSTREAM & PROGRAM_OPTION LIBRARIES [SPECIFY YOUR OWN PATHS]
    #BOOST_INC=/software/include
    #BOOST_LIB_IO=/software/lib64/libboost_iostreams.a
    #BOOST_LIB_PO=/software/lib64/libboost_program_options.a

    #COMPILER & LINKER FLAGS
    #Best performance is achieved with this. Use it if running on the same plateform you're compiling, it's definitely worth it!
    #CXXFLAG=-O3 -march=native
    #Good performance and portable on most intel CPUs
    CXXFLAG=-O3 -mavx2 -mfma
    #Portable version without avx2 (much slower)
    #CXXFLAG=-O3
    LDFLAG=-O3

    #DYNAMIC LIBRARIES
    DYN_LIBS=-lz -lbz2 -lm -lpthread -llzma -lcurl -lcrypto

    #SHAPEIT SOURCES & BINARY
    BFILE=bin/shapeit4
    HFILE=$(shell find src -name *.h)
    CFILE=$(shell find src -name *.cpp)
    OFILE=$(shell for file in `find src -name *.cpp`; do echo obj/$$(basename $$file .cpp).o; done)
    VPATH=$(shell for file in `find src -name *.cpp`; do echo $$(dirname $$file); done)

    #COMPILATION RULES
    all: $(BFILE)

    $(BFILE): $(OFILE)
        $(CXX) $(LDFLAG) $^ $(HTSLIB_LIB) $(BOOST_LIB_IO) $(BOOST_LIB_PO) -o $@ $(DYN_LIBS)

    obj/%.o: %.cpp $(HFILE)
        $(CXX) $(CXXFLAG) -c $< -o $@ -Isrc -I$(HTSLIB_INC) -I$(BOOST_INC)

    clean:
        rm -f obj/*.o $(BFILE)

The compilation then fails on:

    /usr/bin/ld: cannot find -lcurl

I understand that this means the linker (`/usr/bin/ld`) was not able to find the curl library (`libcurl`) in its search paths. Curl is installed on the cluster:

    ├── bin
    │   ├── curl
    │   └── curl-config
    ├── include
    │   └── curl
    ├── lib
    │   ├── libcurl.a
    │   ├── libcurl.la
    │   ├── libcurl.so -> libcurl.so.4.5.0
    │   ├── libcurl.so.4 -> libcurl.so.4.5.0
    │   ├── libcurl.so.4.5.0
    │   └── pkgconfig
    └── share
        ├── aclocal
        └── man

I've tried adding several different lines to the makefile so that it finds `-lcurl`, but I haven't succeeded so far. Can anyone advise me on what to add to the makefile?
Install it with BioConda https://bioconda.github.io/recipes/shapeit4/README.html
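For example (assuming a working conda/miniconda setup, which needs no root permissions; the environment name is arbitrary, and the binary name may differ slightly between package versions):

    conda create -n shapeit4 -c bioconda -c conda-forge shapeit4
    conda activate shapeit4
    shapeit4 --help

Alternatively, the original link error can usually be fixed by telling the linker where libcurl lives, e.g. by prepending a `-L` path to the Makefile's library line (replace the path with the actual prefix of the curl install shown in the question's tree):

    DYN_LIBS=-L/path/to/curl/lib -lz -lbz2 -lm -lpthread -llzma -lcurl -lcrypto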
biostars
{"uid": 490619, "view_count": 1098, "vote_count": 1}
Hi all,

I ran a partitioned heritability analysis using LDSC and custom annotations and got negative heritability enrichment scores for 3 phenotypes out of 34. We were a bit confused about the negative heritability results, and I was then advised to filter my summary statistics to keep only the SNPs which are present in the baseline model that I used to run the partitioned heritability.

My first question is: how do you think keeping only the SNPs in my baseline would affect the heritability enrichments? I did a bit of googling and found out that low heritability, sampling error and low sample size usually drive negative heritability. How do you think filtering the SNPs in my sumstats based on the baseline would change the heritability estimate?

Secondly, a technical question: how can I filter my sumstats based on the baseline? Is there a function to use from one of the common tools, or should I manually keep the SNPs that fall within the genomic intervals in my baseline bed files?

Thanks!
For the record - I found out that my issue was annotations that were too small. I learned from an old post on the LDSC Google group that your annotation must encompass at least 1% of the whole genome (1% of all SNPs in the 1KG phase 3 reference panel is around ~20k SNPs). However, based on my experience, you need at least around 90k-100k SNPs in your annotation for LDSC to work as intended and to minimize negative heritability estimates. Hope this is helpful for others!
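To check an annotation's size before running LDSC, the flagged SNPs in the per-chromosome `.annot.gz` files can simply be counted - a hedged sketch, assuming the standard ldsc annot layout (`CHR BP SNP CM` in the first four columns, a single binary annotation in column 5, and one header line; file names are placeholders):

    total=0
    for f in my_annot.*.annot.gz; do
        n=$(zcat "$f" | awk 'NR > 1 {s += $5} END {print s + 0}')
        total=$((total + n))
    done
    echo "SNPs in annotation: $total"   # aim well above ~20k (>= 1% of reference SNPs)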
biostars
{"uid": 491690, "view_count": 1107, "vote_count": 1}
Hi everyone,

I am trying to plot genomic regions using karyoploteR. I have been following the tutorial page; I converted my data frame into a GRanges object, however the tool does not work with my data. My data frame is simply a list of genomic locations (`chr`, `start`, `end`).

The following script doesn't give any warning or error message, but only plots the chromosomes, not the genomic regions:

    library(karyoploteR)
    library(GenomicRanges)
    library(data.table)

    setwd("/path/to/dataframe")
    df <- fread("for_ideogram.csv", header = T)
    gr <- makeGRangesFromDataFrame(df, start.field="start", end.field= "end")

    kp <- plotKaryotype(genome = "hg19", chromosomes="autosomal", plot.type = 1)
    kpPlotRegions(kp, data=gr)

Does anyone know how to get this to work? Thanks
Hi @gabi,

When karyoploteR creates the ideogram but nothing gets added to it with the other plotting functions, in many cases it is due to different chromosome names. Are your gr chromosome names in UCSC style ("chr1", "chr2", ...) or in Ensembl/NCBI style ("1", "2", ...)? The default Bioconductor chromosome names are UCSC style, and these are the ones used by karyoploteR. Please check if adding `seqlevelsStyle(gr) <- "UCSC"` before plotting solves the problem.

    library(karyoploteR)
    library(GenomicRanges)
    library(data.table)

    setwd("/path/to/dataframe")
    df <- fread("for_ideogram.csv", header = T)
    gr <- makeGRangesFromDataFrame(df, start.field="start", end.field= "end")
    seqlevelsStyle(gr) <- "UCSC"

    kp <- plotKaryotype(genome = "hg19", chromosomes="autosomal", plot.type = 1)
    kpPlotRegions(kp, data=gr)

And I don't know the structure of your csv file, but if the first 3 columns contain the chr, the start and the end positions (like a bed file, with any column names), you can use [`toGRanges`](https://rdrr.io/bioc/regioneR/man/toGRanges.html) to simplify your code to this (toGRanges will change the chromosome names to the ones used by "hg19" in this case):

    library(karyoploteR)

    gr <- toGRanges("path/to/dataframe/for_ideogram.csv", genome="hg19")

    kp <- plotKaryotype(genome = "hg19", chromosomes="autosomal", plot.type = 1)
    kpPlotRegions(kp, data=gr)

If this does not work, I'd need to see the first couple of lines of your csv file to understand what's going on.
biostars
{"uid": 445275, "view_count": 1247, "vote_count": 1}
Hi all, I have some RNAseq data and I would like to take the results forward to build a network based on expression correlation (wgcna or similar). I have several steps that I need to carry out: - normalise data - transform data - subset data (not everything in the dataset is useful for the intended outcome so I need to extract only the useful samples) - remove a known batch effect (I know I can model this for differential expression, but for building a network I think I need to remove it - correct me if I am wrong) I am using DESeq2 for normalisation and transforming using the variance stabilising transform from the same package (as recommend in the wgcna manual). I have noticed that the outputs of my exploratory analyses change depending on the order in which I carry out these steps, particularly PCAs. In most cases, the gross patterns in the data remain intact, but in some cases this is not true. My question is, what is the correct order in which to carry out these steps and why? Currently, I am loading all of the available samples, normalising, transforming the normalised counts, removing the batch effect from the transformed data, then extracting the samples of interest. General discussion about the order in which these processes should be carried out is welcome, but specifically: 1. Would it be more sensible to extract the samples I am interested in first, and then run the downstream steps only on the samples I am interested in? I imagine this would affect the output as the geometric mean across the samples would change. 2. I am currently removing the batch effect from the vst-transformed data. Would I be better to remove the batch effect first from the normalised counts and then transform the batch-corrected data? Thanks for your help
biostars
{"uid": 320862, "view_count": 1828, "vote_count": 2}
Hi all, I have **several BAM files which I would like to work in R environment**. This are big files with **more than 100M reads** each. Is there any way to convert them in GRanges object and then save it as RData compressed file? So far what I've tried is the following R code: library(GenomicRanges) library(Rsamtools) args = commandArgs(TRUE) filename <- args[1] name <- args[2] ## Create GRanges object from the BAM file and save it to disk param <- ScanBamParam(what=c("qname","flag")) b <- readGAlignments(filename, format="BAM", param=param); save(b,file=paste(name,".RData",sep=""), compression_level=9); This code gave me error messages because R is not able to allocate vectors more than 800 ~ 1000 Mb size, which I understand, but is there **any way to create GRanges object of large BAM files**? Thanks for your help!!
It works, you just need a lot of memory and coffee. I have tested this with a 21GB BAM file containing ~400 million aligned reads. The process grows to ~70GB in size and the loading takes several hours. Even though it works in principle, it might be a good idea to do a basic summarization outside of R, e.g. read counting or coverage calculation. To save a little space you could maybe omit the read names. Another possibility might be to split the input by chromosome, but then again, maybe R is not the optimal software for this volume.
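To split the input by chromosome as suggested, a hedged sketch (untested at this scale; it keeps only the ranges and drops read names/flags to save memory):

    library(GenomicAlignments)
    library(Rsamtools)

    bf <- BamFile("input.bam")
    sl <- seqlengths(bf)   # chromosome lengths from the BAM header

    # read one chromosome at a time, keeping coordinates only
    gr <- do.call(c, lapply(names(sl), function(chr) {
        param <- ScanBamParam(which = GRanges(chr, IRanges(1, sl[[chr]])))
        granges(readGAlignments(bf, param = param))
    }))
    save(gr, file = "reads.RData", compress = "xz")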
biostars
{"uid": 97790, "view_count": 5022, "vote_count": 2}
I want to save my fastq files as CRAM files to save space. However, I do want to be able to get back the reads that I have in my fastq, in case I have to re-align them in a different way in the future. Below is the code that I use to do the conversions to BAM and back again. However, when I then compare the number of lines in the original fastq with the converted fastq, the original is much smaller:

    original: 17460203 (lines)
    converted: 147610328 (lines)

The flagstat of the BAM file gives 71038660 + 2766504 in total (QC-passed reads + QC-failed reads). I don't understand why the newly made fastq has more reads, unless the conversion from BAM to fastq also writes reads that map to multiple locations as multiple reads. Should I remove all duplicated reads and then convert?

    # align with STAR
    STAR --readFilesIn fastq1.fq fastq2.fq \
        --genomeDir Gencode_human/ \
        --outFileNamePrefix example.

    # fix mates with picard
    java -jar $EBROOTPICARD/FixMateInformation.jar \
        INPUT=example.bam \
        OUTPUT=example.fixmates.bam \
        VALIDATION_STRINGENCY=LENIENT \
        CREATE_INDEX=true \
        SORT_ORDER=coordinate

    # convert to cram with scramble from io_lib
    scramble \
        -I bam \
        -O cram \
        -r Gencode_human/GRCh38.p5.genome.fa \
        example.fixmates.bam \
        example.cram

    # convert back to bam
    scramble \
        -I cram \
        -O bam \
        -m \
        -r Gencode_human/GRCh38.p5.genome.fa \
        example.cram \
        example.bam

    #sort by name
    samtools sort \
        -n \
        -o example.sorted.bam \
        example.bam

    #convert to fastq
    samtools fastq \
        -@ 4 \
        -1 converted.fastq1.fq \
        -2 converted.fastq2.fq \
        example.sorted.bam
Hello,

I guess the problem is that you align your reads. I don't know how `samtools fastq` handles reads that map to multiple positions; it could be that they also appear multiple times in the output. Instead, I would suggest this workflow:

**1. Create a uBam file using [FastqToSam][1]**

    $ picard FastqToSam F1=fastq1.fastq.gz F2=fastq2.fastq.gz O=Sample1_unaligned.bam SM=Sample1

**2. Convert to CRAM**

    $ samtools view -T genome.fa -C -o Sample1.cram Sample1_unaligned.bam

To get your fastq files back, do:

    $ samtools fastq -@4 -1 sample1_R1.fastq -2 sample1_R2.fastq Sample1.cram

fin swimmer

[1]: https://broadinstitute.github.io/picard/command-line-overview.html#FastqToSam
biostars
{"uid": 335200, "view_count": 7116, "vote_count": 4}
Hi,

I finished mapping and variant calling on SOLiD data. Now I'm looking to identify large rearrangements and transposable elements. Can anyone tell me any good tools for these? One tool I have come across so far is [nFuse](https://code.google.com/p/nfuse/). I wanted to know what the general pipeline for this is.

And I would like to view such rearrangements as `Circos` plots.

Thanks for the help.
Hello Jordan.

I think [SVDetect](http://svdetect.sourceforge.net) can help you.
biostars
{"uid": 81477, "view_count": 4452, "vote_count": 2}
Hi, I want to parse a SAM file based on QNAME, however the python script I designed to do so takes too long. To visualize my issue, here are the first 5 lines of my 4.25 GB SAM file: ``` HWI-D00372:292:C6U8JANXX:2:1310:9053:40752 99 chr1_gl000191_random 623 255 51M = 737 165 CTTTGGGAGGCTGAGGCAGGTGGATCACCTGAGGTCAGGAGTTCGAGACCA /<BBBFFFFFFFFFFFFFFFFFFBFFFFFFFBFFFFFFFFFFFFFFBFFFF XA:i:0 MD:Z:51 PG:Z:MarkDuplicates NM:i:0 HWI-D00372:292:C6U8JANXX:2:2108:14511:86999 99 chr1_gl000191_random 721 255 51M = 769 99 TTAGCTGGGGCTGGTGGCGGATGCCTGTAATCCCAGCTACTCAGGAGGCTG BBBBBFFFFFFFFFFFFFFFFFFFFFFFF<FFFFFFFFFFB<BFFFFBFFF XA:i:0 MD:Z:51 PG:Z:MarkDuplicates NM:i:0 HWI-D00372:292:C6U8JANXX:2:1310:9053:40752 147 chr1_gl000191_random 737 255 51M = 623 -165 GCGGATGCCTGTAATCCCAGCTACTCAGGAGGCTGAGGCTGGGGAATCACT FFFFFFFFFFFFFFFFFFFFFFFFBFFFFFFF/<FFFFFFFFB/FFBBBBB XA:i:0 MD:Z:51 PG:Z:MarkDuplicates NM:i:0 HWI-D00372:292:C6U8JANXX:2:2108:14511:86999 147 chr1_gl000191_random 769 255 51M = 721 -99 CTGAGGCTGGGGAATCACTTGAACCTGGAAGGCGGAGGTTGCAGTGAGCTG F<FFFFBFFFFFBB<F/FFFFF<FFBFFFFFFFFFFFFFFFBFF<FBBBBB XA:i:0 MD:Z:51 PG:Z:MarkDuplicates NM:i:0 HWI-D00372:292:C6U8JANXX:2:1313:1300:84021 163 chr1_gl000191_random 833 255 51M = 877 95 GCACTCCAGCCTACGCAACAAGAGCGAAACTCTGTCTCAAAAAATAAAAAA BBBBBFFFFFFFFFFFFFFFFFFFFFFFBFFFFFFFFFFFFFFFFFFFFFF XA:i:0 MD:Z:51 PG:Z:MarkDuplicates NM:i:0 ``` The QNAME is the `HWI-D00372:292:C6U8JANXX:2:1310:9053:40752` part of the SAM file line. I have a list of about 1 million QNAMEs that I want to iterate through, pick out each SAM file line that corresponds to each QNAME (each QNAME corresponds with 2 separate SAM file lines because this is paired-end sequencing alignment data), then write these lines out into a new SAM file. The best python script I could design to do this task is the following: ```py parsedSamData = "" # initialize parsedSamData variable # read SAM file in as a single string file_handle = open(samFile, 'r') sam_file_contents_string = file_handle.read() file_handle.close() for QNAME in QNAME_list: # iterate through each QNAME regex = re.escape(QNAME) + r".*\n" # design a regular expression to match the SAM file lines with the QNAME mate_pairs = re.finditer(regex, sam_file_contents_string) # match QNAME with the SAM file lines for entry in mate_pairs: # access the match object parsedSamData += entry.group() # append the 2 lines to the parsedSamData variable # save the parsed SAM file data fh = open(parsed_SAM_file_path, "w") fh.write(parsedSamData) fh.close() ``` This definitely does what I want, however the time it takes to go through each QNAME is about 5 seconds. This translates to my script running for about 58 days for this particular SAM file. I would like to perform this task in a few days or less, as I have multiple SAM files to perform this on that range in size from 4-12 GB. I appreciate and thank you for any help!
Eek, you load the whole SAM file into RAM as a single gigantic string, which you then regex-search n times, where n is the number of QNAMEs you're interested in! That's going to need a LOT of memory and will be crazy slow! Try this:

    mate_pairs = {}
    QNAME_list = set(QNAME_list)  # make the membership check fast by using a set

    with open(pathToSam, 'r') as samFile:
        for line in samFile:
            QNAME = line.split('\t')[0]
            if QNAME in QNAME_list:
                try:
                    mate_pairs[QNAME].append(line)
                except KeyError:
                    mate_pairs[QNAME] = [line]

    with open(parsed_SAM_file_path, 'w') as outfile:
        for read, mate in mate_pairs.values():
            outfile.write(read + mate)  # lines still carry their trailing newlines

This will fail if you don't have exactly two reads per QNAME (which is probably a good thing..) and should take about 5-20 minutes to run.

EDIT:

Also, don't give up on Python just because your first foray was thwarted by a pretty sweet grep!
Yes, you can solve many easy problems with grep/cat/sed/awk one-liners - but I always found there are more weird flags in those tools to remember than there are programming paradigms that will never change.
biostars
{"uid": 149654, "view_count": 8086, "vote_count": 1}
I've performed a nucleotide BLAST of a multi-fasta file against itself in order to find sequences which share highly similar flanking sequence, indicative of a duplication event. Given the .xml file produced from this, I need to determine the cases where significant BLAST hits come not from the same sequence (query and subject identical) but from different sequences.

Essentially, what I'm trying to do is parse the .xml output and have it print details about the HSPs as well as IDs for the subject and query in each of those HSPs. I tried playing around with `f.write(header.query_id)` and `f.write(parameters.query_id)`, but those just spit back an error. I suspect those need to be included in a separate for statement, but right now I'm very lost.

I'm going off code based on section 7.4 of the Biopython 1.65 manual:

    from Bio.Blast import NCBIXML
    import sys

    #>python screener.py inputfile.xml outputfile.txt
    input_file=str(sys.argv[1])
    output_file=str(sys.argv[2])

    result_handle=open(input_file)
    blast_records=NCBIXML.parse(result_handle)

    #screens an XML file based on the specification below and prints those results to a file
    with open(output_file, "w") as f:
        for blast_record in blast_records:
            for alignment in blast_record.alignments:
                for hsp in alignment.hsps:
                    if hsp.expect< 0.01:
                        f.write('****Alignment****' + "\n")
                        f.write('sequence:'+ str(alignment.title) +"\n")
                        f.write('length:'+ str(alignment.length) +"\n")
                        f.write('e value:'+ str(hsp.expect) +"\n")
                        f.write(hsp.query[0:75] + '...'+"\n")
                        f.write(hsp.match[0:75] + '...'+"\n")
                        f.write(hsp.sbjct[0:75] + '...'+"\n")

Any suggestions would be, as always, greatly appreciated.
Ok, played around some more and solved the problem. What was needed was: f.write(str(blast_record.query)) So the full code looks like this: ``` from Bio.Blast import NCBIXML import sys input_file=str(sys.argv[1]) output_file=str(sys.argv[2]) result_handle=open(input_file) blast_records=NCBIXML.parse(result_handle) #screens an XML file based on the specification below and prints those results to a file with open(output_file, "w") as f: for blast_record in blast_records: for alignment in blast_record.alignments: for hsp in alignment.hsps: if hsp.expect< 0.01: f.write('****Alignment****' + "\n") f.write('sequence:'+ str(alignment.title) +"\n") f.write('ID:' + str(blast_record.query_id) + "\n") #also added f.write("Query Sequence: " + str(blast_record.query) + "\n") f.write('length:'+ str(alignment.length) +"\n") f.write('e value:'+ str(hsp.expect) +"\n") f.write(hsp.query[0:75] + '...'+"\n") f.write(hsp.match[0:75] + '...'+"\n") f.write(hsp.sbjct[0:75] + '...'+"\n") ```
biostars
{"uid": 139458, "view_count": 2510, "vote_count": 2}
Hello everyone,

This is the first time I am analyzing scRNA data. Compared to the example in the workflow (Seurat), I notice that I have a large amount of mtDNA. I was thinking of increasing the allowed percentage of mtDNA so that I don't lose information, but on the other hand I don't want to end up with bad results. Do you have any suggestions?

Code:

    seurat_object <- subset(seurat_object, subset = nFeature_RNA > 200 & nFeature_RNA < 2500 & percent.mt < 5)

My cells are leukaemia cells, and here you can see the image https://imgur.com/ZNHaMCU

Thank you
[scater](https://bioconductor.org/packages/release/bioc/vignettes/scater/inst/doc/overview.html#3_quality_control) has some nice QC functions that will determine which cells are outliers for each QC metric and remove them. It works on `SingleCellExperiment` objects, which are simple to convert from/to `Seurat` objects. This *feels* a bit better than just arbitrarily setting the thresholds yourself, though the results will be pretty similar.

But as previously mentioned, a threshold around 10% for mitochondrial reads is pretty standard - just make sure most of your obviously dead cells are removed.
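A hedged sketch of that scater route (it assumes human gene symbols with an `MT-` prefix; `perCellQCMetrics()` and `isOutlier()` live in scater/scuttle):

    library(Seurat)
    library(scater)

    sce <- as.SingleCellExperiment(seurat_object)
    is_mito <- grepl("^MT-", rownames(sce))

    qc <- perCellQCMetrics(sce, subsets = list(Mito = is_mito))
    # outliers = more than 3 MADs above the median mito percentage
    discard <- isOutlier(qc$subsets_Mito_percent, type = "higher")
    summary(discard)

    sce <- sce[, !discard]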
biostars
{"uid": 410115, "view_count": 3351, "vote_count": 1}
I have tried to break the problem down to two files. I want to take the most common column-2 value for each sequence (column 1), report the range of column 3 for those rows, and count the number of markers, e.g. the number of Fc06 rows for seq000001. Any idea how to do this? I was hoping for something easy on the Linux command line using awk, but I just can't see it. The data is a cM-and-chromosome file which I wish to summarise.

file1

    seq000001
    seq000002

file2

    seq000001 Fc06 35.948
    seq000001 Fc06 36.793
    seq000001 Fc06 37.009
    seq000001 Fc06 37.009
    seq000001 Fc06 37.009
    seq000001 Fc06 37.009
    seq000001 Fc06 37.009
    seq000001 Fc06 37.009
    seq000001 Fc06 37.009
    seq000001 Fc06 37.009
    seq000001 Fc06 37.009
    seq000001 Fc06 37.009
    seq000001 Fc06 37.009
    seq000001 Fc08 37.009
    seq000001 Fc06 37.368
    seq000001 Fc06 37.368
    seq000001 Fc06 37.368
    seq000001 Fc06 37.368
    seq000001 Fc06 37.368
    seq000001 Fc06 37.58
    seq000001 Fc06 37.916
    seq000001 Fc06 37.916
    seq000001 Fc06 37.916
    seq000001 Fc06 37.916
    seq000001 Fc06 37.916
    seq000001 Fc08 37.916
    seq000001 Fc06 37.916
    seq000001 Fc06 37.916
    seq000001 Fc06 37.916
    seq000001 Fc06 37.916
    seq000001 Fc06 37.916
    seq000001 Fc06 38.1
    seq000001 Fc06 38.664
    seq000001 Fc06 38.923
    seq000001 Fc06 38.978
    seq000001 Fc06 39.324
    seq000001 Fc06 40.341
    seq000001 Fc06 40.341
    seq000001 Fc06 40.341
    seq000001 Fc06 40.543
    seq000001 Fc06 40.543
    seq000001 Fc06 40.543
    seq000001 Fc06 40.992
    seq000001 Fc06 41.189
    seq000001 Fc06 41.189
    seq000001 Fc06 41.347
    seq000001 Fc06 41.39
    seq000001 Fc06 41.39
    seq000001 Fc06 41.39
    seq000001 Fc06 41.399
    seq000001 Fc06 41.399
    seq000001 Fc06 41.399
    seq000001 Fc06 41.657
    seq000001 Fc06 41.71
    seq000001 Fc06 41.785
    seq000001 Fc06 41.923
    seq000001 Fc06 42.237
    seq000001 Fc06 42.634
    seq000001 Fc06 42.963
    seq000001 Fc06 43.285
    seq000001 Fc06 43.478
    seq000001 Fc06 43.478
    seq000001 Fc06 43.478
    seq000001 Fc06 43.478
    seq000001 Fc06 43.744
    seq000001 Fc06 43.744
    seq000001 Fc06 45.234
    seq000001 Fc06 45.234
    seq000001 Fc06 46.272
    seq000001 Fc06 46.272
    seq000001 Fc06 46.272
    seq000001 Fc06 46.272
    seq000001 Fc06 46.272
    seq000001 Fc06 51.086
    seq000002 Fc06 63.754
    seq000002 Fc06 63.754
    seq000002 Fc09 13.078
    seq000002 Fc09 13.078
    seq000002 Fc09 13.078
    seq000002 Fc09 16.342
    seq000002 Fc09 16.342
    seq000002 Fc09 16.342
    seq000002 Fc09 16.342
    seq000002 Fc09 17.004
    seq000002 Fc09 17.004
    seq000002 Fc09 17.004
    seq000002 Fc09 17.004
    seq000002 Fc09 17.004
    seq000002 Fc09 17.004
    seq000002 Fc09 17.004
    seq000002 Fc09 17.004
    seq000002 Fc09 17.004
    seq000002 Fc09 17.004
    seq000002 Fc09 17.806
    seq000002 Fc09 18.544
    seq000002 Fc09 19.59
    seq000002 Fc09 19.59
    seq000002 Fc09 19.59
    seq000002 Fc09 19.59
    seq000002 Fc09 19.59
    seq000002 Fc09 19.59
    seq000002 Fc09 19.59
    seq000002 Fc09 19.59
    seq000002 Fc09 19.59
    seq000002 Fc09 19.59
    seq000002 Fc11 19.59
    seq000002 Fc09 19.59
    seq000002 Fc09 39.857
    seq000002 Fc09 20.558
    seq000002 Fc09 20.892
    seq000002 Fc09 21.323

Output

    seq000001 Fc06 35.948-63.754 72
    seq000002 Fc09 13.078-39.857 34
While the datamash solution is clearly the best, just for fun, an attempt based on perl:

    perl -MList::Util=min,max -lane 'BEGIN{%y}{$x=$F[0]."\t".$F[1];push @{$y{$x}},$F[-1]}END{foreach(keys %y){my @d=@{$y{$_}};print "$_\t",scalar(@d),"\t",min(@d),"--",max(@d)}}' inputfile.txt

Output:

    seq000002 Fc11 1 19.59--19.59
    seq000002 Fc09 34 13.078--39.857
    seq000001 Fc08 2 37.009--37.916
    seq000001 Fc06 72 35.948--51.086
    seq000002 Fc06 2 63.754--63.754
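The datamash answer referenced above is not shown in this thread; a hedged reconstruction of what it probably looked like (GNU datamash; `-W` treats runs of whitespace as the field delimiter, and the input must be sorted on the grouping columns):

    sort -k1,1 -k2,2 file2 | datamash -W -g 1,2 count 3 min 3 max 3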
biostars
{"uid": 319990, "view_count": 1481, "vote_count": 1}
Hi,

I am using methods of analysis where it is necessary that the LD and the effect sizes are computed with respect to the same reference allele, while in my case they are not, since I am using an available panel to compute LD with Plink and my own data to compute the effect sizes. To align the two measurements, I would have to compare each allele with those of the reference and flip the effect sizes in cases where the alleles are flipped, but things get messy when including both SNPs and INDELs in the comparisons. To avoid this step, I was hoping it could be possible to specify a reference allele directly when computing LD in Plink, like this:

    plink --bfile 1000Genomes --r2 --extract list.snps --reference-allele list.reference.allele

If this is not possible, are there any tools out there to easily compare alleles from two datasets and adjust the effect sizes of the study if the alleles are flipped compared to a reference file such as 1000 Genomes?

I have previously used ['QCGWAS' - CRAN][1], which was really nice for this, but the files I have now are all very large BED files that take too long to load into R; that is why, if Plink could do this, it would be much faster...

Many thanks for any suggestions and ideas!

Fra

[1]: https://cran.r-project.org/web/packages/QCGWAS/QCGWAS.pdf
From Christopher Chang

> Yes, `--a2-allele` lets you choose the reference alleles for the current run.

Based on Christopher Chang's answer, to compute LD in Plink using a specific allele as the reference we can use:

    plink --bfile 1000Genomes --r2 --extract list.snps --a2-allele list.reference.allele
biostars
{"uid": 158325, "view_count": 2474, "vote_count": 2}
I have genomic positions in a tab-delimited file as follows:

    Chr   Start       End
    2     10262865    11801950
    2     9637403     10927601
    2     25141434    27157396
    2     31181368    38044662
    2     127808684   129276101
    2     236957807   238896574
    2     237172736   238896574

**First, I want to find overlapping segments within these genomic segments in the same file (edit)**. Then I wish to plot them such that the segments do not overlap on the plot. Is there a tool that can aid in:

1. Finding overlapping regions within the same file
2. Visualising the segments in a non-overlapping manner?

My attempts at plotting this in R result in the segments occupying the same position.
using the tool developed for https://www.biostars.org/p/336589

http://lindenb.github.io/jvarkit/Biostar336589.html

https://gist.github.com/lindenb/08f23bb31c230878d1d744ef64f8636d
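For the first part (finding overlaps within a single file), a hedged bedtools sketch - intersecting the file with itself and dropping the trivial self-matches (assumes a headerless, tab-separated BED-like file: chrom, start, end):

    sort -k1,1 -k2,2n segments.bed > segments.sorted.bed
    bedtools intersect -a segments.sorted.bed -b segments.sorted.bed -wa -wb \
        | awk '!($2 == $5 && $3 == $6)'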
biostars
{"uid": 367522, "view_count": 2906, "vote_count": 1}
Hello, I used samtools view -b -h exome.bam chrM > extractedchrM.bam to extract the mitochondrial DNA from an exome into a bam file and to include a header. I was wondering if the header below was correct considering that my new bam is only chrM, but the header has more than that. Is it because it just takes the header of the original bam file? ``` @HD VN:1.4 SO:coordinate @SQ SN:chrM LN:16571 @SQ SN:chr1 LN:249250621 @SQ SN:chr2 LN:243199373 @SQ SN:chr3 LN:198022430 @SQ SN:chr4 LN:191154276 @SQ SN:chr5 LN:180915260 @SQ SN:chr6 LN:171115067 @SQ SN:chr7 LN:159138663 @SQ SN:chr8 LN:146364022 @SQ SN:chr9 LN:141213431 @SQ SN:chr10 LN:135534747 @SQ SN:chr11 LN:135006516 @SQ SN:chr12 LN:133851895 @SQ SN:chr13 LN:115169878 @SQ SN:chr14 LN:107349540 @SQ SN:chr15 LN:102531392 @SQ SN:chr16 LN:90354753 @SQ SN:chr17 LN:81195210 @SQ SN:chr18 LN:78077248 @SQ SN:chr19 LN:59128983 @SQ SN:chr20 LN:63025520 @SQ SN:chr21 LN:48129895 @SQ SN:chr22 LN:51304566 @SQ SN:chrX LN:155270560 @SQ SN:chrY LN:59373566 @SQ SN:chr1_gl000191_random LN:106433 @SQ SN:chr1_gl000192_random LN:547496 @SQ SN:chr4_ctg9_hap1 LN:590426 @SQ SN:chr4_gl000193_random LN:189789 @SQ SN:chr4_gl000194_random LN:191469 @SQ SN:chr6_apd_hap1 LN:4622290 @SQ SN:chr6_cox_hap2 LN:4795371 @SQ SN:chr6_dbb_hap3 LN:4610396 @SQ SN:chr6_mann_hap4 LN:4683263 @SQ SN:chr6_mcf_hap5 LN:4833398 @SQ SN:chr6_qbl_hap6 LN:4611984 @SQ SN:chr6_ssto_hap7 LN:4928567 @SQ SN:chr7_gl000195_random LN:182896 @SQ SN:chr8_gl000196_random LN:38914 @SQ SN:chr8_gl000197_random LN:37175 @SQ SN:chr9_gl000198_random LN:90085 @SQ SN:chr9_gl000199_random LN:169874 @SQ SN:chr9_gl000200_random LN:187035 @SQ SN:chr9_gl000201_random LN:36148 @SQ SN:chr11_gl000202_random LN:40103 @SQ SN:chr17_ctg5_hap1 LN:1680828 @SQ SN:chr17_gl000203_random LN:37498 @SQ SN:chr17_gl000204_random LN:81310 @SQ SN:chr17_gl000205_random LN:174588 @SQ SN:chr17_gl000206_random LN:41001 @SQ SN:chr18_gl000207_random LN:4262 @SQ SN:chr19_gl000208_random LN:92689 @SQ SN:chr19_gl000209_random LN:159169 @SQ SN:chr21_gl000210_random LN:27682 @SQ SN:chrUn_gl000211 LN:166566 @SQ SN:chrUn_gl000212 LN:186858 @SQ SN:chrUn_gl000213 LN:164239 @SQ SN:chrUn_gl000214 LN:137718 @SQ SN:chrUn_gl000215 LN:172545 @SQ SN:chrUn_gl000216 LN:172294 @SQ SN:chrUn_gl000217 LN:172149 @SQ SN:chrUn_gl000218 LN:161147 @SQ SN:chrUn_gl000219 LN:179198 @SQ SN:chrUn_gl000220 LN:161802 @SQ SN:chrUn_gl000221 LN:155397 @SQ SN:chrUn_gl000222 LN:186861 @SQ SN:chrUn_gl000223 LN:180455 @SQ SN:chrUn_gl000224 LN:179693 @SQ SN:chrUn_gl000225 LN:211173 @SQ SN:chrUn_gl000226 LN:15008 @SQ SN:chrUn_gl000227 LN:128374 @SQ SN:chrUn_gl000228 LN:129120 @SQ SN:chrUn_gl000229 LN:19913 @SQ SN:chrUn_gl000230 LN:43691 @SQ SN:chrUn_gl000231 LN:27386 @SQ SN:chrUn_gl000232 LN:40652 @SQ SN:chrUn_gl000233 LN:45941 @SQ SN:chrUn_gl000234 LN:40531 @SQ SN:chrUn_gl000235 LN:34474 @SQ SN:chrUn_gl000236 LN:41934 @SQ SN:chrUn_gl000237 LN:45867 @SQ SN:chrUn_gl000238 LN:39939 @SQ SN:chrUn_gl000239 LN:33824 @SQ SN:chrUn_gl000240 LN:41933 @SQ SN:chrUn_gl000241 LN:42152 @SQ SN:chrUn_gl000242 LN:43523 @SQ SN:chrUn_gl000243 LN:43341 @SQ SN:chrUn_gl000244 LN:39929 @SQ SN:chrUn_gl000245 LN:36651 @SQ SN:chrUn_gl000246 LN:38154 @SQ SN:chrUn_gl000247 LN:36422 @SQ SN:chrUn_gl000248 LN:39786 @SQ SN:chrUn_gl000249 LN:38502 @RG ID:ID18_Father_exome_L001 PL:ILLUMINA PU:L001 LB:ID18_Father_exome SM:ID18_Father_exome ```
Yes, otherwise you would have to reassign a new reference ID to every read.
biostars
{"uid": 149088, "view_count": 1887, "vote_count": 2}
I used BLAT to find matches for a query B-cell receptor transcript. What does the arrow+box (shown in the green box) mean in the UCSC Genome Browser? Can anyone help?

[the screenshot is here][1]

[1]: https://ibb.co/RSbdzG6
The arrow+box (which we refer to as double horizontal lines) represents an alignment with gaps in both the target genome and the query sequence. There are three ways an alignment can be drawn in BLAT results:

- **Solid black bar**: The query matches the genome entirely in that region.
- **Single-line arrows**: (as seen in the screenshot as well) These represent alignments with gaps in the genome, but not in the query. If you were BLAT'ing mRNA, you would expect to see all the mRNA exons as black bars and the introns as single-line arrows. The arrows indicate the direction of the match.
- **Double horizontal lines**: Gaps in both the query and the target genome. These should occur less frequently, as they represent a poorer match. There is sequence in both the query and the target genome that differs here. The length of the double horizontal line represents the length of the gap on the target genome, but to find out the query length you would have to click into the alignment itself.

If you zoom into your alignment you should see some solid alignment blocks as well. Usually these alignments are at the edges, with large double gaps in between. However, when you are zoomed out (1.3M bp), it can be hard to see these alignment blocks.

There is also some additional color-coded information as part of the BLAT display. If you click on the track and scroll all the way to the bottom, you should see it under the Description section.

If you have any further questions, you may reach out to the Genome Browser help desk by emailing genome@soe.ucsc.edu.
biostars
{"uid": 399252, "view_count": 1516, "vote_count": 1}
I'm looking for a quick ftp-based solution to downloading FASTA canonical and splicing variant data from Uniprot:

Go to uniprot.org, type in Homo sapiens, click Download, select the format "FASTA (canonical & isoform)", click Go.

Can I automate this process for all organisms in the Uniprot database using ftp in one nice command-line utility?
You can download precomputed reference proteomes by ftp. They include canonical and isoform fasta sequences:

ftp://ftp.expasy.org/databases/uniprot/current_release/knowledgebase/reference_proteomes/

(see also http://www.uniprot.org/help/reference_proteome)

If this is not what you want, you can indeed automate the http download, as described in the [UniProt help page for programmatic access](http://www.uniprot.org/help/programmatic_access), in particular [Retrieving entries via queries](http://www.uniprot.org/help/programmatic_access#retrieving_entries_via_queries)

e.g. http://www.uniprot.org/uniprot/?query=organism:9606&format=fasta&include=yes&compress=yes

Please note that large http downloads are much more vulnerable and likely to fail than ftp downloads.

Please don't hesitate to contact the UniProt helpdesk if you have any additional questions.
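Using the query URL above, the per-organism download is easy to script, e.g. (the URL scheme is exactly the one given in this answer; the taxon IDs are just examples):

    for taxon in 9606 10090; do
        wget -O "uniprot_${taxon}.fasta.gz" \
          "http://www.uniprot.org/uniprot/?query=organism:${taxon}&format=fasta&include=yes&compress=yes"
    done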
biostars
{"uid": 140718, "view_count": 1739, "vote_count": 1}
The package upsetr was installed with conda. It **works** in my **terminal** conda activate upsetr conda deactivate I was trying to invoke conda in a **Makefile** , something like SHELL=/bin/bash test: conda activate upsetr && conda deactivate but it doesn't work: CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'. To initialize your shell, run $ conda init <SHELL_NAME> Currently supported shells are: I can reproduce this error by piping the command into bash $ echo 'conda activate upsetr && conda deactivate' | bash CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'. To initialize your shell, run $ conda init <SHELL_NAME> Why does it fail ?
Non-interactive shells [do not run *.bashrc*](https://www.gnu.org/software/bash/manual/html_node/Bash-Startup-Files.html#Bash-Startup-Files). So you could e.g. add the `conda init` stuff to $BASH_ENV, or alternatively make your sub-shell interactive (with all the side-effects that implies) by piping into `… | bash -i` or in the *Makefile* use `SHELL = /bin/bash -i`. For the (GNU) *Makefile*, probably the best bet would be to avoid those side-effects via ```make SHELL = /bin/bash export BASH_ENV = $(HOME)/.conda.bash_env ``` Where *~/.conda.bash_env* is a one-liner: `eval "$(/path/to/bin/conda shell.bash hook)"` Finally, you will need to ensure each command in a Make recipe that uses `conda` is actually executed via $(SHELL) rather than spawned directly by `make` via `exec(2)`/etc. The easiest way to do that is to make sure that each of those command lines contains at least one shell meta-character, e.g., just by putting a semicolon on the end. As your `conda activate …` commands will be setting $PATH for the following commands in the recipe, this will somewhat take care of itself as you will want to ensure that both commands are run by the same shell, either using `.ONESHELL` or the usual syntax for shell script snippets: ```make analyse: conda activate upsetr; \ upsetr foo bar ```
biostars
{"uid": 9515156, "view_count": 1729, "vote_count": 1}
Hi all,

I am trying to do a couple of operations automatically. Basically, I have a directory of paired-end fastq files from the same sample, e.g.:

    L16-24MG-A_S14_L001_R1_001.fastq.gz L16-24MG-A_S14_L003_R1_001.fastq.gz
    L16-24MG-A_S14_L002_R1_001.fastq.gz L16-24MG-A_S14_L004_R1_001.fastq.gz
    L16-24MG-A_S14_L001_R2_001.fastq.gz L16-24MG-A_S14_L003_R2_001.fastq.gz
    L16-24MG-A_S14_L002_R2_001.fastq.gz L16-24MG-A_S14_L004_R2_001.fastq.gz

I am trying to cat the fastq files together and then run bowtie, so that at the end I have only one .bam file. This is my script so far, but it's not quite working. In fact, I am not even able to obtain a combined fastq file before bowtie. Can you help me out please?

    for i in $(ls *R1*.gz)
    do
    cat *R1* > ${i%.R1_combined.fastq}.gz
    done

    for i in $(ls *R2*.gz)
    do
    cat *R2* > ${i%.R2_combined.fastq}.gz
    done

    gunzip *.gz

    for i in $(ls *.fastq | rev | cut -c 13- | rev | uniq)
    do
    bowtie /home/casaburi/ufrc/hybrid_pacbio_global/rsem/final_assembly_cdhit100 \
    -1 ${i}_R1_combined.fastq -2 ${i}_R2_combined.fastq \
    --all --best --strata -m 300 --chunkmbs 512 -S -p 10 | samtools view -F 4 -S -b -o ${i}.bam
    done
Try: `for i in $(ls -1 L16*R1*); do cat ${i%%_R1*}_R1_001.fastq.gz >> ${i%%_S14*}_combined.fastq.gz; done`
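If the R2 files need merging too, a hedged generalisation of the same idea (looping over both read suffixes; the `_S14` pattern is specific to this sample's naming, and gzip files can safely be concatenated as-is, so no gunzip is needed):

    for r in R1 R2; do
        for i in L16*_${r}_001.fastq.gz; do
            cat "$i" >> "${i%%_S14*}_${r}_combined.fastq.gz"
        done
    done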
biostars
{"uid": 247117, "view_count": 2493, "vote_count": 1}
Dear all,

I have a multi-sample (>100) VCF file and a list of SNPs with CHR and POS for which the sample names and their genotypes need to be extracted. For example, I have a SNP chr1:23455. I would like to know how many and which samples have the homozygous alternate allele, the heterozygous genotype, and the homozygous reference allele. Is there any existing tool that provides the above summary?

Many thanks!!!
**Last update: May 9, 2021**

You ***must*** normalise your VCF / BCF first; otherwise, this script will not work as expected. You can do this with:

    bcftools norm -m-any MyVariants.vcf -Ov > MyVariants.Norm.vcf

I probably should explain what's going on here, too. It is divided into three parts (each sub-command starts with `<(bcftools` on its own line):

1. First `awk` command: outputs the first few columns from the VCF/BCF
2. Three `BCFtools query` commands: tabulate the number of samples having each variant type (as you requested). This will work for phased and/or un-phased variants. The output is the 3 columns named `nHet`, `nHomAlt`, `nHomRef`
3. Three `BCFtools view` commands: look through the file again, saving sample names into an array called '*header*', and then printing the indices of the array where a particular field (i.e. sample) has a particular type of variant. This is repeated for: (a) heterozygous (`HetSamples`), (b) homozygous alt (`HomSamplesAlt`), and (c) homozygous ref (`HomSamplesRef`).

I've tested it and verified the results on a handful of 1000 Genomes variants. I strongly encourage you to do some rigorous testing.

    paste <(bcftools view MyVariants.Norm.vcf |\
      awk -F"\t" 'BEGIN {print "CHR\tPOS\tID\tREF\tALT"} \
        !/^#/ {print $1"\t"$2"\t"$3"\t"$4"\t"$5}') \
      \
    <(bcftools query -f '[\t%SAMPLE=%GT]\n' MyVariants.Norm.vcf |\
      awk 'BEGIN {print "nHet"} {print gsub(/0\|1|1\|0|0\/1|1\/0/, "")}') \
      \
    <(bcftools query -f '[\t%SAMPLE=%GT]\n' MyVariants.Norm.vcf |\
      awk 'BEGIN {print "nHomAlt"} {print gsub(/1\|1|1\/1/, "")}') \
      \
    <(bcftools query -f '[\t%SAMPLE=%GT]\n' MyVariants.Norm.vcf |\
      awk 'BEGIN {print "nHomRef"} {print gsub(/0\|0|0\/0/, "")}') \
      \
    <(bcftools view MyVariants.Norm.vcf | awk -F"\t" '/^#CHROM/ {split($0, header, "\t"); print "HetSamples"} \
      !/^#CHROM/ {for (i=10; i<=NF; i++) {if (gsub(/0\|1|1\|0|0\/1|1\/0/, "", $(i))==1) {printf header[i]","}; if (i==NF) {printf "\n"}}}') \
      \
    <(bcftools view MyVariants.Norm.vcf | awk -F"\t" '/^#CHROM/ {split($0, header, "\t"); print "HomSamplesAlt"} \
      !/^#CHROM/ {for (i=10; i<=NF; i++) {if (gsub(/1\|1|1\/1/, "", $(i))==1) {printf header[i]","}; if (i==NF) {printf "\n"}}}') \
      \
    <(bcftools view MyVariants.Norm.vcf | awk -F"\t" '/^#CHROM/ {split($0, header, "\t"); print "HomSamplesRef"} \
      !/^#CHROM/ {for (i=10; i<=NF; i++) {if (gsub(/0\|0|0\/0/,"", $(i))==1) {printf header[i]","}; if (i==NF) {printf "\n"}}}') \
      \
    | sed 's/,\t/\t/g' | sed 's/,$//g'

    CHR POS ID REF ALT nHet nHomAlt nHomRef HetSamples HomSamplesAlt HomSamplesRef
    1 10177 1:10177:10177:A:AC A AC 4 0 1 HG00096,HG00097,HG00099,HG00100 HG00101
    1 10235 1:10235:10235:T:TA T TA 0 0 5 HG00096,HG00097,HG00099,HG00100,HG00101
    1 10352 1:10352:10352:T:TA T TA 5 0 0 HG00096,HG00097,HG00099,HG00100,HG00101
    1 10616 1:10616:10616:CCGCCGTTGCAAAGGCGCGCCG:C CCGCCGTTGCAAAGGCGCGCCG C 0 5 0 HG00096,HG00097,HG00099,HG00100,HG00101
    1 10642 1:10642:10642:G:A G A 0 0 5 HG00096,HG00097,HG00099,HG00100,HG00101
    1 11008 1:11008:11008:C:G C G 0 0 5 HG00096,HG00097,HG00099,HG00100,HG00101
    1 11012 1:11012:11012:C:G C G 0 0 5 HG00096,HG00097,HG00099,HG00100,HG00101
    1 11063 1:11063:11063:T:G T G 0 0 5 HG00096,HG00097,HG00099,HG00100,HG00101
    1 13110 1:13110:13110:G:A G A 1 0 4 HG00097 HG00096,HG00099,HG00100,HG00101
    1 13116 1:13116:13116:T:G T G 2 0 3 HG00097,HG00101 HG00096,HG00099,HG00100
    1 13118 1:13118:13118:A:G A G 2 0 3 HG00097,HG00101 HG00096,HG00099,HG00100
    1 13273 1:13273:13273:G:C G C 0 0 5 HG00096,HG00097,HG00099,HG00100,HG00101
    1 13284 1:13284:13284:G:A G A 0 0 5
HG00096,HG00097,HG00099,HG00100,HG00101 1 13380 1:13380:13380:C:G C G 0 0 5 HG00096,HG00097,HG00099,HG00100,HG00101 1 13483 1:13483:13483:G:C G C 0 0 5 HG00096,HG00097,HG00099,HG00100,HG00101 1 13494 1:13494:13494:A:G A G 0 0 5 HG00096,HG00097,HG00099,HG00100,HG00101 1 13550 1:13550:13550:G:A G A 0 0 5 HG00096,HG00097,HG00099,HG00100,HG00101 1 14464 1:14464:14464:A:T A T 1 1 3 HG00099 HG00096 HG00097,HG00100,HG00101 1 14599 1:14599:14599:T:A T A 2 0 3 HG00097,HG00099 HG00096,HG00100,HG00101 1 14604 1:14604:14604:A:G A G 2 0 3 HG00097,HG00099 HG00096,HG00100,HG00101 1 14930 1:14930:14930:A:G A G 5 0 0 HG00096,HG00097,HG00099,HG00100,HG00101 1 14933 1:14933:14933:G:A G A 0 0 5 HG00096,HG00097,HG00099,HG00100,HG00101 1 15211 1:15211:15211:T:G T G 5 0 0 HG00096,HG00097,HG00099,HG00100,HG00101 1 15245 1:15245:15245:C:T C T 0 0 5 HG00096,HG00097,HG00099,HG00100,HG00101 1 15274 1:15274:15274:A:G A G 3 0 2 HG00096,HG00100,HG00101 HG00097,HG00099 1 15274 1:15274:15274:A:T A T 3 2 0 HG00096,HG00100,HG00101 HG00097,HG00099
biostars
{"uid": 298361, "view_count": 10138, "vote_count": 9}
**Goal**

I am trying to use the FDist2 methodology. [Lositan][1] seems to not be working anymore. [Arlequin](http://cmpg.unibe.ch/software/arlequin35/Arl35Downloads.html) has an implementation of FDist2, so I am now trying to use FDist2 via Arlequin via the command line. Note that I am on Mac OS X. If it works from the command line, I can also run it on Linux.

**Issue**

I will have to specify in the .ars file that I want FDist2 to be performed. Each specific requirement has a keyword that has to be associated with a value (typically 0 or 1) in the .ars file. For example, `ComputeStandardDiversityIndices=1` and `LocByLocAMOVA=1` can be set to 1, as shown, if we want these specific statistics to be computed.

In the `arlsumstat_readme.txt` there is a list of keywords called `Keywords in ssdefs.txt` (I don't really know what ssdefs.txt is), but these keywords don't match those used in the example files found in the .zip file for Arlecore (even AMOVA is absent, for instance), and none of these keywords seems to describe what I want to do. I can't find any of these keywords in the manual either.

In the .zip file for Arlecore there are a number of examples, incl. one called detSel, which sounds exactly like what I would like to do, but for some reason its .ars file is absent!

**Questions**

My questions are:

- Where can I find these keywords?
- More specifically, what is the keyword for using FDist2?
- Could you provide me with a reproducible example of an FDist2 analysis?

[1]: http://popgen.net/soft/lositan/
Facing similar issues, [Whitlock and Lotterhos (2014)][1] wrote their own implementation of FDist2. It is available via [this Dryad repository][2]. It is not quite user-friendly, but it comes with an R wrapper script that is nicely commented.

[1]: http://onlinelibrary.wiley.com/doi/10.1111/mec.12725/full
[2]: http://datadryad.org/resource/doi:10.5061/dryad.v8d05
biostars
{"uid": 263837, "view_count": 2952, "vote_count": 1}
I'm analysing a microarray (single-channel) study with 2 experimental groups, each one with only 2 samples. I wanted to compare the two groups in order to find DE genes, so, I ran limma analysis on this dataset. The results were weird...lots of genes w/ very low p-values (lowest p-value: 6.144e-213). I made a volcano plot to see what was happening with the data...got this **weird** volcano plot: http://s33.postimg.org/mxnx39j67/weird_volcano.png Do you guys know what kind of stuff would lead to that? >print(annot) GEO_ID group 1 GSM321605 uninfected 2 GSM321606 uninfected 3 GSM321607 infected 4 GSM321608 infected >design <- model.matrix(~0 +annot[, "group"]) >head(design) annot[, "group"]infected annot[, "group"]uninfected 1 0 1 2 0 1 3 1 0 4 1 0 >colnames(design) <- c("infected", "uninfected") >cm <- makeContrasts(InfvsUninf = infected-uninfected, levels=design) >print(cm) Contrasts Levels InfvsUninf infected 1 uninfected -1 >fit <- lmFit(exprs, design) >fit2 <- contrasts.fit(fit, cm) >fit2 <- eBayes(fit2) >results <- topTable(fit2, "InfvsUninf", number=Inf) >head(results) ID logFC AveExpr t P.Value adj.P.Val IDO1 7.496098 8.077363 31.43580 6.144487e-213 8.017326e-209 TNFAIP6 6.945901 7.551685 29.12848 1.359307e-183 8.868116e-180 RSAD2 6.532472 8.066029 27.39471 6.405851e-163 2.786118e-159 IFI27 6.230206 7.727492 26.12712 1.461824e-148 4.768471e-145 IFITM1 6.128964 6.775055 25.70255 6.766299e-144 1.765733e-140 INHBA 5.915550 7.068516 24.80758 2.674626e-134 5.816421e-131
I think one of the reasons that you are getting weird p-values is that the **filtering of the dataset is not proper**, or that the standard error across all the probes in your dataset is large. Limma calculates a **moderated t-statistic**, which uses the posterior variance rather than the sample variance in the t-test. Or, in layman's language, it is the ratio of the M-value to its standard error, similar to the t-statistic. However, in this case the standard errors are moderated across all genes/probes in your `exprs`. You may find more about it [here][1].

I would recommend you to use all the probes, or use `varFilter` to filter out the low-variance probes/genes.

Code:

    > dim(exprs)
    [1] 54675 4
    > design <- model.matrix(~ group + 0)
    > colnames(design) <- levels(group)
    > head(design)
      uninfected infected
    1          1        0
    2          1        0
    3          0        1
    4          0        1
    > cont.matrix <- makeContrasts(InfvsUninf = infected-uninfected, levels=design)
    > print(cont.matrix)
                Contrasts
    Levels       InfvsUninf
      infected            1
      uninfected         -1
    > fit1 <- lmFit(exprs, design)
    > fit2 <- contrasts.fit(fit1, cont.matrix)
    > fit2 <- eBayes(fit2)
    Warning message:
    Zero sample variances detected, have been offset
    > results <- topTable(fit2, "InfvsUninf", number=nrow(exprs))
    > head(results)
                   logFC  AveExpr        t      P.Value    adj.P.Val        B
    210029_at   7.496098 8.077363 53.69020 5.996081e-09 0.0001206758 9.926007
    202411_at   6.230206 7.727492 52.40836 6.882552e-09 0.0001206758 9.869859
    201601_x_at 5.956054 7.549371 52.02982 7.173197e-09 0.0001206758 9.852669
    222838_at   5.748147 7.996823 49.34098 9.709206e-09 0.0001206758 9.721920
    206025_s_at 6.945901 7.551685 48.24558 1.103574e-08 0.0001206758 9.663952
    201108_s_at 5.564837 7.035012 45.45633 1.549903e-08 0.0001401571 9.502414

Now, filtering out the low-variance probes changes the value of the moderated t-statistic (t column). The p-value depends on the t value and therefore becomes lower than before:

    > library(genefilter)
    > exprs = varFilter(exprs)
    > dim(exprs)
    [1] 27337 4
    > design <- model.matrix(~ group + 0)
    > colnames(design) <- levels(group)
    > head(design)
      uninfected infected
    1          1        0
    2          1        0
    3          0        1
    4          0        1
    > cont.matrix <- makeContrasts(InfvsUninf = infected-uninfected, levels=design)
    > print(cont.matrix)
                Contrasts
    Levels       InfvsUninf
      infected            1
      uninfected         -1
    > fit1 <- lmFit(exprs, design)
    > fit2 <- contrasts.fit(fit1, cont.matrix)
    > fit2 <- eBayes(fit2)
    > results <- topTable(fit2, "InfvsUninf", number=nrow(exprs))
    > head(results)
                   logFC  AveExpr        t      P.Value    adj.P.Val         B
    210029_at   7.496098 8.077363 33.85290 3.055145e-50 8.351851e-46 103.28349
    206025_s_at 6.945901 7.551685 31.34093 1.140235e-47 1.558530e-43  97.62137
    213797_at   6.532472 8.066029 29.35202 1.661339e-45 1.513868e-41  92.82696
    202411_at   6.230206 7.727492 28.24293 3.021780e-44 2.065160e-40  90.02318
    204698_at   6.131048 7.734954 27.51300 2.145762e-43 1.173174e-39  88.12390
    214022_s_at 6.128964 6.775055 27.39214 2.980586e-43 1.358005e-39  87.80514

In case you are interested in the top ~5000 genes/probes only, then use `varFilter(exprs, var.cutoff = 0.9)`. Try playing with different values of var.cutoff to get the desired number of genes.

[1]: https://stat.ethz.ch/pipermail/bioconductor/2004-September/006132.html
biostars
{"uid": 195292, "view_count": 4163, "vote_count": 2}
Hi,

Would you please recommend a good software tool for searching for domains in a protein or coding nucleotide sequence (other than the NCBI Conserved Domain Search)? Your help is greatly appreciated.
Did you consider the [PFAM][1] database? The **'Domain Organizations'** views are very beautifully prepared. Look at [this][2].

You can search for the domains present in a protein sequence with the `hmmscan` tool, searching the sequence against an [HMM][3] database built from PFAM.

[1]: http://pfam.xfam.org/
[2]: http://pfam.xfam.org/family/pf00406#tabview=tab1
[3]: ftp://ftp.ebi.ac.uk/pub/databases/Pfam/current_release
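A hedged sketch of that hmmscan step (HMMER 3; it assumes Pfam-A.hmm has been downloaded from the Pfam FTP site linked above):

    hmmpress Pfam-A.hmm   # build the binary indices once
    hmmscan --domtblout domains.tbl Pfam-A.hmm my_protein.fasta

The per-domain hits, with coordinates and e-values, end up in `domains.tbl`. For a coding nucleotide sequence you would translate it first (e.g. with EMBOSS transeq) and scan the protein translation.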
biostars
{"uid": 142260, "view_count": 3024, "vote_count": 1}
I have the following input:

(1) List of protein-coding gene models in GFF3 format
(2) List of protein features of these genes (e.g. protein domains or transmembrane regions), with coordinates on the protein sequence level.

As output, I would like to have an image that shows all gene models drawn to scale side-by-side, with protein features mapped onto coding exons. Different feature types should be drawn in different colors (configurable), and features are allowed to span exon-exon junctions. Ideally, the generation of such an image can be completely automated.

Example image:

![gene models](http://s13.postimage.org/wswaeu4nr/gene_models.png)

Can anyone suggest a program that can do this?
I ended up implementing two new BioPerl modules myself that, when used in combination, solve this problem. I just uploaded both modules to GitHub:

- [Bio::Graphics::Glyph::decorated_gene](https://github.com/Gig77/Bio-Graphics-Glyph-decorated_gene)
- [Bio::Draw::FeatureStack](https://github.com/Gig77/Bio-Draw-FeatureStack)

Quoting from the module descriptions:

> **Bio::Graphics::Glyph::decorated_gene** - A GFF3-compatible gene glyph with protein decorations.
>
> This glyph extends the functionality of the Bio::Graphics::Glyph::gene glyph and allows to draw protein decorations (e.g., signal peptides, transmembrane domains, protein domains) on top of gene models. Currently, the glyph can draw decorations in form of colored or outlined boxes inside or around CDS segments. Protein decorations are specified at the 'mRNA' transcript level in protein coordinates. Protein coordinates are automatically mapped to nucleotide coordinates by the glyph. Decorations are allowed to span exon-exon junctions, in which case decorations are split between exons. By default, the glyph automatically assigns different colors to different types of protein decorations, whereas decorations of the same type are always assigned the same color.

and

> **Bio::Draw::FeatureStack** - BioPerl module to generate GD images of stacked gene models
>
> FeatureStack creates GD images of vertically stacked gene models to facilitate visual comparison of gene structures. Compared genes can be clusters of orthologous genes, gene family members, or any other genes of interest. FeatureStack takes an array of BioPerl feature objects as input, projects them onto a common coordinate space, flips features on the negative strand (optional), left-aligns them by start coordinates (optional), sets a fixed intron size (optional), removes unwanted transcripts/isoforms (optional), and then draws the so transformed features with a user-specified glyph. Output images can be generated in SVG (scalable vectorized image) or PNG (rastered image) format.

Here is an example output of FeatureStack:

![FeatureStack example output showing RFX genes across a diverse set of species](http://genome.sfu.ca/projects/featurestack/screenshot.png)
biostars
{"uid": 16337, "view_count": 6644, "vote_count": 5}
I am looking for tools that would help process a multiple alignment of protein sequences and extract only the polymorphic sites, based on a reference sequence. What I have in mind is a matrix-like output, like so:

```
Position  2    10   20   45   60   63
RefSeq    Val  Leu  Ala  Ser  Phe  Thr
Seq1      Met  Tyr
Seq2      Gly  Pro
Seq3      Ala  Arg
```

Any suggestions for tools or scripts (preferably Python) that can help me achieve this fast? Thanks.
This looks like a task you could script yourself before going looking for a tool to do the job for you. Assuming you have the sequences in a Python list, with the reference as the last element, here's something you could do:

```
seqs = do_your_magic_to_get_the_sequences()  # placeholder: load your aligned sequences
ref = seqs[-1]                               # reference is assumed to be the last one

# collect, per sequence, the alignment positions that differ from the reference
diffs_top = []
all_diffs = set()
for s in seqs[:-1]:
    diffs = [i for i, c in enumerate(s) if c != ref[i]]
    diffs_top.append(diffs)
    all_diffs.update(diffs)

positions = sorted(all_diffs)

# header: 1-based positions, then the reference residues at those positions
print("".join("%-3d" % (p + 1) for p in positions))
print("".join("%-3s" % ref[p] for p in positions))

# one row per sequence, printing a residue only where it differs from the reference
for i, s in enumerate(seqs[:-1]):
    print("".join("%-3s" % (s[p] if p in diffs_top[i] else " ") for p in positions))
```

The printout for some example data would then look like this (each non-reference row shows only the residues that differ):

```
1  2  3  4  5  6  7  8  9  10 11 12 13 14
M  L  Y  M  V  A  R  V  A  Y  K  K  N  P
V  P  R  F  M  G
L  A  B  K  L  P  P  K
B  C  N  P  Y  F  C  C  L  C
```
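If it helps, the placeholder loader above could be filled in with Biopython (a sketch assuming an aligned FASTA file; `AlignIO` ships with Biopython):

```
from Bio import AlignIO

def do_your_magic_to_get_the_sequences():
    # read one multiple sequence alignment in FASTA format
    aln = AlignIO.read("alignment.fasta", "fasta")
    return [str(rec.seq) for rec in aln]
```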
biostars
{"uid": 108951, "view_count": 1862, "vote_count": 1}
Hi, I have PDX RNA-seq data. I want to separate the mouse reads from the human reads before comparing with tumor RNA-seq data. Is there any tool to do that? After removing the mouse reads I want to do differential expression between primary tumors and PDX tumors. Thanks, Ron
[Xenome][1] works pretty well.

 [1]: http://www.ncbi.nlm.nih.gov/pubmed/?term=22689758
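For reference, a rough sketch of a typical xenome run (option names are from memory of the xenome README and may differ between versions; check `xenome --help`):

    # build a combined host/graft index once (fasta paths are assumptions)
    xenome index -T 8 -P pdx_idx -H mouse.fa -G human.fa

    # classify paired-end reads into human, mouse, both, neither and ambiguous bins
    xenome classify -T 8 -P pdx_idx --pairs -i reads_1.fastq -i reads_2.fastq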
biostars
{"uid": 143019, "view_count": 10234, "vote_count": 4}
I have a VCF file with ~30K sites across 131 samples. I am trying to make it include only variant sites, meaning I want to exclude loci where all of my 131 samples have the same genotype, regardless of what the reference allele is. I used GATK SelectVariants with the -env flag, but that only excludes sites where all samples are 0/0 (homozygous reference), not sites where all samples are 1/1 (homozygous alternate). I am a pretty terrible coder and struggle to modify VCF files. My question is: does anybody have a script or know of a tool that can remove the entire site (line) if all 131 samples (columns?) have 1/1 in the genotype position? Or more generally, if all samples have the same genotype at that site, whether it be 0/0, 0/1, or 1/1 (GATK can do the 0/0 and 0/1, but if it's easier to kill 3 birds with one stone then no problem). Thanks! Alex
using VCFFilterjs: http://lindenb.github.io/jvarkit/VCFFilterJS.html

    java -jar dist/vcffilterjs.jar -e '
    function accept(v) {
        var g0 = v.getGenotype(0);
        for (var i = 1; i < v.getNSamples(); i++) {
            if (!v.getGenotype(i).sameGenotype(g0)) return true;
        }
        return false;
    }
    accept(variant);' input.vcf
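If you have bcftools available, a similar filter can be sketched with its expression language (this assumes a reasonably recent version that supports `F_PASS()`; genotypes are compared by class here, so multiallelic edge cases may need extra care):

    # exclude sites where 100% of samples are hom-ref, or 100% hom-alt, or 100% het
    bcftools view -e 'F_PASS(GT="RR")==1 || F_PASS(GT="AA")==1 || F_PASS(GT="het")==1' input.vcf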
biostars
{"uid": 260344, "view_count": 4151, "vote_count": 1}
Hi everyone,

I have two unequal data sets.

A data set

    V1 V2       V3
    6  42721754 42721769
    6  42721757 42721772
    6  42721760 42721775
    6  42721763 42721778
    6  42721766 42721781
    6  42721769 42721784
    6  42721772 42721787
    6  42721775 42721790

B data set

    V2       AF
    42721757 0.003067485
    42721760 0.006134969
    42721763 0.006134969
    42721766 0.003067485
    42721769 0.006134969
    42721772 0.006134969
    42721775 0.003067485
    42721778 0.006134969
    42721781 0.003067485
    42721784 0.003067485
    42721787 0.009202454
    42721790 0.009202454

I want to check if the value of `V2` in `B data set` falls between the values of `V2` and `V3` of `A data set`. When it does, I want the value of `AF` in `B data set` to be added as a new column in `A data set`.

There are two important points:

 1. `B data set` is 80 rows while `A data set` is 6000 rows.
 2. A value of `V2` in `B` may be repeated, and repeats should not get thrown out in the final output.

Any help would be appreciated. Thanks,
Using **R** *data.table*:

    library(data.table)

    # example data
    A <- fread(text = "
    V1 V2 V3
    6 42721754 42721769
    6 42721757 42721772
    6 42721760 42721775
    6 42721763 42721778
    6 42721766 42721781
    6 42721769 42721784
    6 42721772 42721787
    6 42721775 42721790")

    B <- fread(text = "
    V2 AF
    42721757 0.003067485
    42721760 0.006134969
    42721763 0.006134969
    42721766 0.003067485
    42721769 0.006134969
    42721772 0.006134969
    42721775 0.003067485
    42721778 0.006134969
    42721781 0.003067485
    42721784 0.003067485
    42721787 0.009202454
    42721790 0.009202454")

    # set the same keys for both A and B, start and end positions.
    # If we have multiple chromosomes, we need to include chrom as a key, too.
    setDT(A, key = c("V2", "V3"))
    setDT(B)
    B <- B[, .(V2, V3 = V2, AF)]
    setkeyv(B, c("V2", "V3"))

    # merge with overlap, check the "type" argument for merge types, see ?foverlaps
    res <- foverlaps(B, A)

    head(res)
    #    V1       V2       V3     i.V2     i.V3          AF
    # 1:  6 42721754 42721769 42721757 42721757 0.003067485
    # 2:  6 42721757 42721772 42721757 42721757 0.003067485
    # 3:  6 42721754 42721769 42721760 42721760 0.006134969
    # 4:  6 42721757 42721772 42721760 42721760 0.006134969
    # 5:  6 42721760 42721775 42721760 42721760 0.006134969
    # 6:  6 42721754 42721769 42721763 42721763 0.006134969
biostars
{"uid": 402757, "view_count": 1315, "vote_count": 1}
Hi Everyone, How can I extract promoter sequences (ca. 1000bp upstream TSS) in multi-fasta format from genome (also multi-fasta with scaffolds) using information from corresponding GFF file? I've already tried to use GFF-Ex tool, however it didn't help (finished with errors). It is tobacco genome (ftp://ftp.ncbi.nlm.nih.gov/genomes/all/GCF/000/715/135/GCF_000715135.1_Ntab-TN90/). Does anyone know some other tools for this? Thanks,
Finally, I've solved this problem by combining samtools, bedtools and a custom R script. The pipeline, wrapped in a bash script, is available [here](https://github.com/RimGubaev/extract_promoters).
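For reference, the core of such a pipeline can be sketched with bedtools alone (a minimal sketch; file names and the 1000 bp window are assumptions, and gene coordinates must first be pulled from the GFF into a strand-aware BED file):

    # scaffold sizes (name<TAB>length); genome.fa.fai comes from: samtools faidx genome.fa
    cut -f1,2 genome.fa.fai > genome.txt

    # take 1000 bp upstream of each gene, strand-aware (-s), nothing downstream
    bedtools flank -i genes.bed -g genome.txt -l 1000 -r 0 -s > promoters.bed

    # extract the promoter sequences, reverse-complementing minus-strand entries
    bedtools getfasta -fi genome.fa -bed promoters.bed -s -fo promoters.fa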
biostars
{"uid": 295404, "view_count": 4079, "vote_count": 1}
Hi all,

I am struggling to make a contrast with edgeR.

    design <- model.matrix(~Gender + Age + Group, data = patient.table)

The (disease) Group is a factor with levels [Control, Mild, Severe]. I want to compare C-M, M-S and C-S. Thus:

    DGEL <- edgeR::DGEList(expressionData)
    keep <- edgeR::filterByExpr(DGEL)
    DGEL <- DGEL[keep, , keep.lib.sizes=FALSE]
    DGEL <- edgeR::calcNormFactors(DGEL, method = "TMM")
    DGEL <- edgeR::estimateDisp(DGEL, design)
    fit <- edgeR::glmQLFit(DGEL, design)
    contrasts <- makeContrasts(c.vs.m = GroupC - GroupM,
                               m.vs.s = GroupM - GroupS,
                               c.vs.s = GroupC - GroupS,
                               levels = design)

However, there is no GroupC in the model matrix; there is only an Intercept, GroupM and GroupS. How can I compare anything against the control group on a full-rank design matrix? I already searched the user guide and here on the website, but could not find it.

I also tried

    design <- model.matrix(~Gender + Age + Group, data = patient.table,
                           contrasts.arg = list(
                               Group = contrasts(patient.table$Group, contrasts = FALSE)
                           ))

but edgeR did not like that and informed me that the matrix was not of full rank. I am pretty sure this is something basic, but I just don't get it as a wet-lab worker.
Check the [edgeR Users Guide][1], chapter 3.3, on how to define the contrasts for such a complex design. For me, the approach that works better (as in, it is easier to understand) is the one described in section 3.3.1.

Start by creating a composite `group` variable:

    group <- factor(paste0(patient.table$Gender, patient.table$Age, patient.table$Group))

Then specify the design without an intercept, and create the contrasts of interest:

    design <- model.matrix(~0 + group)
    colnames(design) <- levels(group)

    contrasts <- makeContrasts(
        GenderM_Age0_GroupC_vs_M = GenderMAge0GroupC - GenderMAge0GroupM,
        # and so on for the other contrasts
        levels = design)

To perform the tests:

    fit <- glmQLFit(DGEL, design)
    qlf <- glmQLFTest(fit, contrast = contrasts[, "GenderM_Age0_GroupC_vs_M"])

This tests for **GroupC** vs **GroupM** differences within **GenderM** and **Age0**. See https://www.biostars.org/p/453485/ for a recent similar post, you may find the answer there useful.

If you want more complex tests, such as any genes changing over all ages, read sections 3.3.3 and 3.3.4 from the guide.

 [1]: http://bioconductor.org/packages/3.12/bioc/vignettes/edgeR/inst/doc/edgeRUsersGuide.pdf
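To pull the results table out afterwards (standard edgeR, shown for completeness):

    # top genes for this contrast, ranked by p-value
    topTags(qlf, n = 20)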
biostars
{"uid": 453513, "view_count": 2165, "vote_count": 1}
Hi Guys, I'm new to RNA-seq and R programming, so forgive my ignorance in advance! I'm currently using a program/script to help me map tRNAs (the supplied notes don't explain it in much detail), and after tRNA counts are generated in all my conditions, it uses size factors to normalize the dataset (in DESeq2). I've tried to read up on what exactly size factors are and I don't understand it. Could anyone give me an easy-to-understand definition of what size factors are and why they're used to normalize the data?
A size factor relates to how many reads there are in each library. Imagine two samples where 10% of the reads in each came from gene A, but 1M reads were sequenced in one sample and 2M reads in the other: there would be a two-fold increase in the counts for gene A in sample 2 compared to sample 1, even though the actual expression levels were the same.

Early RNA-seq analyses divided counts by the total number of reads in each library, but this is poor practice for two reasons:

1. Using division means that you lose the discrete nature of the gene counts, and thus negative binomial statistics no longer apply. Normalising factors are therefore used as offsets in the linear model, rather than as divisors.

2. In most RNA-seq samples the most highly expressed genes take up the majority of the reads. Thus in a 1M read sample, if 300k reads came from gene A (leaving 700k for all the others), and that gene doubled in expression to 600k reads (leaving 400k for all the others), the expression of the other genes would appear to halve, even though it stayed the same.

Thus size factors are related to the library size (total number of reads in the library), but are calculated in such a way as to compensate for effect 2 above. One common method is to find the 75th centile of the distribution of read counts for each sample, and then calculate a normalisation factor such that the 75th centile is the same across all samples. DESeq2 itself uses the related "median-of-ratios" method: each sample's counts are compared gene-by-gene to a geometric-mean reference, and the size factor is the median of those ratios.
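To make that concrete, here is a minimal R sketch of the median-of-ratios idea (assuming `counts` is a genes x samples matrix of raw counts; DESeq2's own `estimateSizeFactors()` does this, with more care, for you):

    # log geometric mean of each gene across samples (the "reference" sample)
    log_geo_means <- rowMeans(log(counts))

    size_factors <- apply(counts, 2, function(cnts) {
        # use only genes with a finite reference (positive in all samples)
        ok <- is.finite(log_geo_means) & cnts > 0
        # median ratio of this sample to the reference, computed on the log scale
        exp(median(log(cnts[ok]) - log_geo_means[ok]))
    })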
biostars
{"uid": 359060, "view_count": 3849, "vote_count": 4}
I have got some RNA-seq data from Mus musculus, prepared with TruSeq Stranded Total RNA and sequenced paired-end. I want to count my reads over my genes with respect to gene orientation: if a gene is on the plus strand, reads on the forward strand falling into that gene should be counted, and if a gene is on the minus strand, reads on the reverse strand falling into that gene should be counted.

Let's take two examples from Ensembl:

- `Hmgb2` : `ENSMUSG00000054717.7` `chr8:57,511,907-57,515,999` `forward strand` http://www.ensembl.org/Mus_musculus/Gene/Summary?db=core;g=ENSMUSG00000054717;r=8:57511907-57515999
- `Ighm` : `ENSMUSG00000076617.9` `chr12:113,418,558-113,422,730` `reverse strand` http://www.ensembl.org/Mus_musculus/Gene/Summary?db=core;g=ENSMUSG00000076617;r=12:113418558-113422730

I created my index with a [gencode reference genome][1] and I did my alignments using STAR with `--quantMode` and [an annotation file][2] from gencode. I've got my `_Aligned.sortedByCoord.out.bam` and `_ReadsPerGene.out.tab`.

Checking the strand of my 2 genes in my annotation file:

    zgrep -i --color "Hmgb2" gencode.vM18.chr_patch_hapl_scaff_and_23_custom_igh_gff3sort.annotation.gtf.gz
    #chr8 HAVANA gene 57511907 57515999 . + . gene_id "ENSMUSG00000054717.7"; gene_type "protein_coding"; gene_name "Hmgb2"; level 2; havana_gene "OTTMUSG00000060717.1";

    zgrep -i --color "Ighm" gencode.vM18.chr_patch_hapl_scaff_and_23_custom_igh_gff3sort.annotation.gtf.gz
    #chr12 HAVANA gene 113418558 113422730 . - . gene_id "ENSMUSG00000076617.9"; gene_type "IG_C_gene"; gene_name "Ighm"; level 2; havana_gene "OTTMUSG00000051485.2";

OK, good: I have `Hmgb2` on the plus strand and `Ighm` on the minus strand.

----------

Checking the read strand under IGV for these two genes:

`Hmgb2`

<a href="https://ibb.co/WGcyV8s"><img src="https://i.ibb.co/9cqTZKG/biostars1.png" alt="biostars1" border="0"></a>

`Ighm`

<a href="https://ibb.co/cyLqxSy"><img src="https://i.ibb.co/nwCpRNw/biostars2.png" alt="biostars2" border="0"></a>

`Hmgb2` gets a lot of F2R1 read pairs (blue reads) and, vice versa, `Ighm` gets a lot of F1R2 read pairs (red reads).

Note: as paired-end Illumina sequencing is R1 forward and R2 reverse, I was expecting R1 to lead the forward strand, which is not my case. But anyway, that is not the point; R2 is leading the forward strand.

----------

Reading the documentation of [STAR][3], part `7. Counting number of reads per gene.`:

> STAR outputs read counts per gene into ReadsPerGene.out.tab file with 4 columns which correspond to different strandedness options:
> - column 1: gene ID
> - column 2: counts for unstranded RNA-seq
> - column 3: counts for the 1st read strand aligned with RNA (htseq-count option -s yes)
> - column 4: counts for the 2nd read strand aligned with RNA (htseq-count option -s reverse)
>
> Select the output according to the strandedness of your data. Note, that if you have stranded data and choose one of the columns 3 or 4, the other column (4 or 3) will give you the count of antisense reads.

So, what I get is that either the 3rd column gives me my sense count or antisense, and the 4th will give me the opposite strand result. Fine. But I want to know if STAR takes the gene strand into account. I've found this [thread][4] (with $2 = column 2, $3 = column 3, $4 = column 4):

> $2 is for unstranded hits, but those overlapping on opposite strand of features are considered ambiguous. $3 reports hits based on the strand you have given in your gff annotation, and $4 in the reverse direction of your features in gff (for PE-data the 5'3'-direction is also considered). Refer to -s option of htseq-count

----------

So, I was expecting to get a high count for `Hmgb2` in either column 3 or 4, and a high count for `Ighm` in the other column.

`Hmgb2` / `ENSMUSG00000054717.7`

    grep "ENSMUSG00000054717.7" file_ReadsPerGene.out.tab
    #ENSMUSG00000054717.7 3400 31 3369

`Ighm` / `ENSMUSG00000076617.9`

    grep "ENSMUSG00000076617.9" file_ReadsPerGene.out.tab
    #ENSMUSG00000076617.9 16063 11 16052

All my high counts are in the 4th column... Did I forget to tweak some options?

 [1]: ftp://ftp.ebi.ac.uk/pub/databases/gencode/Gencode_mouse/release_M20/GRCm38.primary_assembly.genome.fa.gz
 [2]: ftp://ftp.ebi.ac.uk/pub/databases/gencode/Gencode_mouse/release_M20/gencode.vM20.chr_patch_hapl_scaff.annotation.gtf.gz
 [3]: http://chagall.med.cornell.edu/RNASEQcourse/STARmanual.pdf
 [4]: https://www.biostars.org/p/218995/
Most stranded libraries produced in the past ~7 years should correspond to the counts in the last column, wherein read 2's orientation matches that of the transcript. The description you linked to is wrong. Column 2 is for unstranded libraries, so the strand of a feature on the genome is ignored. Column 3 assumes that the first read in a pair's orientation matches that of the originating fragment (this is rarely the case) and column four the same but for read 2. TruSeq kits use the standard method, so the last column is correct.
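If it helps, a small sketch for pulling out the correct (last) column; the first four rows of `ReadsPerGene.out.tab` are summary rows (N_unmapped, N_multimapping, N_noFeature, N_ambiguous), so they are skipped here:

    # gene ID plus the reverse-stranded counts (column 4), skipping the 4 summary rows
    tail -n +5 file_ReadsPerGene.out.tab | cut -f1,4 > counts_reverse.txt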
biostars
{"uid": 367786, "view_count": 2562, "vote_count": 1}
Hello everyone,

I want to use awk to count the number of values within a range in a specific column, and to print the ranges with their counts in the output. For example, the 4th column of my input is like this:

    1000
    4000
    5001
    7050
    4000
    3500

With a bin size of 5000, I want to know the number of values in range 1 - 5000, 5001 - 10000, and so on. My values go above 100,000. I want to count values as many times as they appear; for example, 4000 should be counted twice. The output should show two columns, the range and the count, like:

    1000-5000  4
    5000-10000 2

I found many relevant answers on different forums but nothing that exactly serves my purpose. I found one code snippet, but it counts duplicate values only once.

Thanks
Using **awk**:

    $ awk '{print (int($1/5000)+1)*5000}' temp.txt | sort -g | uniq -c
    #      4 5000
    #      2 10000

Using **R**:

    # example input
    x <- c(1000, 4000, 5001, 7050, 4000, 3500)

    # using cut into 5K intervals
    table(cut(x, seq(0, 10000, 5000)))
    #     (0,5e+03] (5e+03,1e+04]
    #             4             2

    # using round
    table(round(x, -4))
    #     0 10000
    #     4     2
biostars
{"uid": 419409, "view_count": 2443, "vote_count": 1}
In this graph 81 cells have been ordered by pseudotime. For example, cell s0.76 has been placed at the start of the pseudotime and cell s0.11 at the end.

![enter image description here][1]

This is the expression profile of this gene along this pseudotime:

    > Data
        s0.76     s0.52     s0.72     s0.36     s0.18     s0.60     s0.54     s0.65     s0.56      s0.4     s0.14     s0.67      s0.1     s0.22     s0.28     s0.80     s0.16     s0.32
     6354.316  2376.278  8037.063  4879.946  4989.872  5382.370 13239.227 12085.752  5510.481  9311.501  5303.664 13480.634  9085.402  9585.708 12678.642  8595.338 13901.138  9809.339
         s0.8     s0.55     s0.71     s0.41     s0.40     s0.48     s0.68     s0.81     s0.29     s0.78     s0.34     s0.42     s0.13      s0.9      s0.7     s0.24     s0.62     s0.35
    12554.518 11001.622  6676.949  9266.956  5303.998 13405.392  8327.440  6903.236  9319.111  4427.627 11029.422  5370.687  4854.454 13199.522  5125.644 11687.103  7546.622  8656.206
        s0.23     s0.21     s0.37     s0.25     s0.79     s0.64     s0.44     s0.15     s0.43     s0.63     s0.10      s0.6     s0.19     s0.31     s0.38     s0.39     s0.73     s0.59
     4578.953 15771.795  3217.201  6362.859 11523.728  5977.613  4805.703  5373.737  2393.996  9104.929  3532.215  7974.742  4949.271  3285.566 10896.503  3746.348  4296.337  6510.001
         s0.2     s0.12     s0.61     s0.66     s0.51     s0.58     s0.50     s0.27     s0.77     s0.69     s0.33     s0.17     s0.74     s0.26     s0.57     s0.49     s0.70     s0.53
     3728.896  8761.615  3896.014  2618.151  4106.299 11898.634  2978.410  3129.880  5571.995  4314.655  3867.158  2209.066  1723.784  2741.320  4570.466  5506.473  3120.334  7284.682
        s0.30     s0.45      s0.5     s0.47     s0.46     s0.75     s0.20      s0.3     s0.11
     9454.997  7446.095  4765.971  7861.282 10393.357  4809.735 12421.501 12615.785 22238.399

Now, my question is: how could I color the plot, knowing that one set of cells is in G2 and another set is in the S/M phase of the cell cycle, like in this picture?

![enter image description here][2]

These cells are in G2:

     [1] "s0.1"  "s0.2"  "s0.3"  "s0.6"  "s0.7"  "s0.8"  "s0.9"  "s0.10" "s0.12" "s0.13" "s0.14" "s0.15" "s0.16" "s0.19" "s0.21" "s0.22" "s0.23" "s0.24" "s0.25" "s0.28" "s0.29" "s0.31"
    [23] "s0.32" "s0.34" "s0.35" "s0.37" "s0.38" "s0.39" "s0.42" "s0.43" "s0.44" "s0.48" "s0.50" "s0.51" "s0.54" "s0.55" "s0.56" "s0.59" "s0.60" "s0.61" "s0.62" "s0.63" "s0.64" "s0.65"
    [45] "s0.66" "s0.67" "s0.68" "s0.70" "s0.71" "s0.73" "s0.78" "s0.79" "s0.80" "s0.81"

These cells are in M/S:

     [1] "s0.4"  "s0.5"  "s0.11" "s0.17" "s0.18" "s0.20" "s0.26" "s0.27" "s0.30" "s0.33" "s0.36" "s0.40" "s0.41" "s0.45" "s0.46" "s0.47" "s0.49" "s0.52" "s0.53" "s0.57" "s0.58" "s0.69"
    [23] "s0.72" "s0.74" "s0.75" "s0.76" "s0.77"

I tried to do that with the code below, but the order of the cells is not based on the pseudotime:

    > head(data)
          value  cell capture
    1  30.59486 s0.76      G2
    2  78.12876 s0.52      G2
    3  63.36236 s0.72      G2
    4  54.48303 s0.36     m/s
    5 196.49463 s0.18     m/s
    6  59.76789 s0.60      G2

    > p = ggplot(data, aes(x=cell, y=value, color=capture))
    > p + geom_point(alpha=.3)

![enter image description here][3]

 [1]: https://image.ibb.co/gEKXsK/Rplot.jpg
 [2]: https://image.ibb.co/c5PJzz/btw372f3p.jpg
 [3]: https://image.ibb.co/gptvCK/Rplot01.jpg
p+geom_point(alpha=.3)+scale_x_discrete(limits=as.factor(data$cell))
biostars
{"uid": 338110, "view_count": 987, "vote_count": 1}
Recently, while searching for papers about time series or time course, I found that researchers use these two terms interchangeably. However, "time series" seems to be used more widely; Wikipedia also uses time series (https://en.wikipedia.org/wiki/Time_series). What is the difference between these two phrases? Is there any specific context in which one phrase is more frequently used?

Best,

Pengcheng
They're the same thing. You'll find that there are many words and phrases in English that mean the same thing but happen to be used more in one group than another (e.g., in my original field of study, a "time series analysis" could sometimes be called a "non-stationary analysis").
biostars
{"uid": 158802, "view_count": 10767, "vote_count": 2}
We are in the process of developing analysis pipelines for WGS, RNA-seq and Methyl-seq data. The first project considers 100 patients with a common disease; all the analyses would be run on data from these patients. Our institute's computational center is offering us 5 nodes (24 CPU cores, 2.6GHz clock speed, 128GB RAM each), and I believe that the necessary storage space will also be provided. Based on your experience, would you please tell me if having 5 nodes is going to be sufficient? I understand that there cannot be one single answer to this question, but any help is much appreciated.
The two big things are:

1) How much RAM does a single instance of the "largest" program you'll run need? This is the only hard requirement. When I've run Trinity I've needed 1TB of RAM before; if I didn't have a machine with 1TB of RAM, I couldn't have run Trinity. If the program with the highest memory consumption fits in 128GB, then that's fine.

2) How long are you willing to wait for things to run? More nodes means you can run more stuff at once; you don't really *need* them.

Figure out which is more important to you, time or money. Spend more money, get more nodes, get higher throughput. Save money, get fewer nodes, spend more time waiting for results.
biostars
{"uid": 197947, "view_count": 1436, "vote_count": 1}
I have f3 statistics in the form f3(Outgroup: POI_1, Comparative), f3(Outgroup: POI_2, Comparative), and so on, for a handful of pairs of POIs (populations of interest) and over a hundred comparative populations. For each pair of POIs, I want to make a scatter plot where one f3 value is on the X axis and the other is on the Y axis, but I want both sets of standard errors graphed. Basically, I want something conceptually similar to this plot from Allentoft et al. 2015 (http://www.nature.com/nature/journal/v522/n7555/fig_tab/nature14507_SF2.html).
This is really an R "question", but [here you go](http://stackoverflow.com/questions/9231702/ggplot2-adding-two-errorbars-to-each-point-in-scatterplot).
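In short, the ggplot2 pattern looks something like this (a sketch; the data frame and column names `f3_x`, `f3_y`, `se_x`, `se_y` are assumptions for your merged table of the two f3 runs):

    library(ggplot2)

    # df: one row per comparative population, with the two f3 values and their SEs
    ggplot(df, aes(x = f3_x, y = f3_y)) +
        geom_errorbar(aes(ymin = f3_y - se_y, ymax = f3_y + se_y), width = 0) +
        geom_errorbarh(aes(xmin = f3_x - se_x, xmax = f3_x + se_x), height = 0) +
        geom_point()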
biostars
{"uid": 173611, "view_count": 3254, "vote_count": 1}