Hello, I have a fasta file with header lines of this format:

    >FBti0019256 type=transposable_element; loc=2L:22300300..22304444; name=invader2{}555; dbxref=FlyBase_Annotation_IDs:TE19256,FlyBase:FBti0019256; MD5=d9259a0e33aad699215e64916bd47a5b; length=4145; release=r6.19; species=Dmel;

I would like to convert these lines into a bed file of this format:

    chr2L \t 22300300 \t 22304444 \t invader2

Is there a program that can directly perform this conversion, or an awk command that makes it easier? Please let me know, thank you for your help.
Use an associative array in awk to store each component. I'm just too lazy to extract the chrom/start/end, but you get the idea:

    awk -F ' ' '/^>/ {
        delete map
        for (i=1; i<=NF; i++) {
            eq = index($i, "=")
            if (eq == 0) continue
            key = substr($i, 1, eq-1)
            val = substr($i, eq+1)
            gsub(/;$/, "", val)
            map[key] = val
        }
        OFS = "\t"
        print map["loc"], map["type"], map["name"], map["dbxref"], map["MD5"]
    }'
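Building on the associative-array approach above, here is a sketch that does extract the chrom/start/end and emits the requested BED. It assumes `loc` always has the form `chrom:start..end` and that the desired name is everything before the `{`; the input/output filenames are placeholders:

```bash
awk -F ' ' 'BEGIN { OFS = "\t" } /^>/ {
    delete map
    for (i = 1; i <= NF; i++) {
        eq = index($i, "="); if (eq == 0) continue
        key = substr($i, 1, eq - 1); val = substr($i, eq + 1)
        gsub(/;$/, "", val); map[key] = val
    }
    # "2L:22300300..22304444" -> loc[1]="2L", loc[2]="22300300", loc[3]="22304444"
    split(map["loc"], loc, /[:.]+/)
    # "invader2{}555" -> "invader2"
    name = map["name"]; sub(/\{.*$/, "", name)
    print "chr" loc[1], loc[2], loc[3], name
}' transposons.fa > transposons.bed
```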
biostars
{"uid": 312675, "view_count": 1734, "vote_count": 1}
Hi all, I have paired-end ATAC-seq data for 4 libraries (2 replicates for each of 2 samples). I aligned with Bowtie2, filtered MT reads and duplicates using Picard, then performed peak calling on the BAM files using MACS2. I also did differential peak analysis using deepTools and filtered the peaks by FDR < 0.05 and abs(log2 fold change) > 2. After these steps, I generated peak density heatmaps using deepTools. However, the peak heights at the top of the heatmaps are not the same for the 4 files, although I normalized the BAM files while converting to bigWig using bamCoverage. My questions are: Should the heights of those peaks be the same, or is a slight difference acceptable? If not, how can I normalize the data? Should I normalize the BAM files and then redo the peak calling, and if so which tool do you suggest, or is DiffBind normalization okay? Also, I am really confused about coverage-file normalization versus peak normalization. Lastly, as written in this post https://www.biostars.org/p/308976/, how can I downsample each sample? If you could explain these I will appreciate it.

![peak signal heatmap][1]

[1]: https://www.dropbox.com/home?preview=image.png
I wouldn't expect the peak heights to be identical; some amount of biological variation is normal. The goal of the normalization should instead be to set the background level to roughly similar values between samples. You do not need to normalize your BAM files; csaw or DiffBind will take care of that step for you. If you want to downsample, you can either use `samtools view -s` if starting from BAM files, or `seqtk` if starting from fastq files (see the sketch below), though this is generally not needed.
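If you do end up downsampling, a minimal sketch (filenames, fractions, and seeds are placeholders):

```bash
# keep ~20% of alignments; samtools view -s takes <seed>.<fraction>
samtools view -b -s 42.20 sample.bam > sample.sub.bam

# or subsample fastq files; reusing the same seed (-s) for R1 and R2
# keeps the mates in sync
seqtk sample -s100 sample_R1.fastq.gz 0.2 > sample_sub_R1.fastq
seqtk sample -s100 sample_R2.fastq.gz 0.2 > sample_sub_R2.fastq
```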
biostars
{"uid": 426840, "view_count": 3165, "vote_count": 1}
Hi, I want to try out Velvet-SC in the context of single cell genome assemblies. I downloaded the source code from [their website][1]. [1]: http://bix.ucsd.edu/projects/singlecell/ However, when I try to compile it (using the arguments I also use for standard velvet, e.g. "CATEGORIES=6", "MAXKMERLENGTH=120", "BIGASSEMBLY=1", "OPENMP=1"), I get a lot of errors and the make script exits with "Error 1". This is the output I get when running make:

```
MP=1'
rm obj/*.o obj/dbg/*.o
rm: cannot remove 'obj/dbg/*.o': No such file or directory
make: [cleanobj] Error 1 (ignored)
cd third-party/zlib-1.2.3; ./configure; make; rm minigzip.o; rm example.o
Checking for gcc...
Building static library libz.a version 1.2.3 with gcc.
Checking for unistd.h... Yes.
Checking whether to use vs[n]printf() or s[n]printf()... using vs[n]printf()
Checking for vsnprintf() in stdio.h... Yes.
Checking for return value of vsnprintf()... Yes.
Checking for errno.h... Yes.
Checking for mmap support... Yes.
make[1]: Entering directory `/home/user/Downloads/velvet-sc/third-party/zlib-1.2.3'
gcc -O3 -DUSE_MMAP -c -o example.o example.c
gcc -O3 -DUSE_MMAP -c -o adler32.o adler32.c
gcc -O3 -DUSE_MMAP -c -o compress.o compress.c
gcc -O3 -DUSE_MMAP -c -o crc32.o crc32.c
gcc -O3 -DUSE_MMAP -c -o gzio.o gzio.c
gcc -O3 -DUSE_MMAP -c -o uncompr.o uncompr.c
gcc -O3 -DUSE_MMAP -c -o deflate.o deflate.c
gcc -O3 -DUSE_MMAP -c -o trees.o trees.c
gcc -O3 -DUSE_MMAP -c -o zutil.o zutil.c
gcc -O3 -DUSE_MMAP -c -o inflate.o inflate.c
gcc -O3 -DUSE_MMAP -c -o infback.o infback.c
gcc -O3 -DUSE_MMAP -c -o inftrees.o inftrees.c
gcc -O3 -DUSE_MMAP -c -o inffast.o inffast.c
ar rc libz.a adler32.o compress.o crc32.o gzio.o uncompr.o deflate.o trees.o zutil.o inflate.o infback.o inftrees.o inffast.o
gcc -O3 -DUSE_MMAP -o example example.o -L. libz.a
gcc -O3 -DUSE_MMAP -c -o minigzip.o minigzip.c
gcc -O3 -DUSE_MMAP -o minigzip minigzip.o -L. libz.a
make[1]: Leaving directory `/home/user/Downloads/velvet-sc/third-party/zlib-1.2.3'
mkdir -p obj
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/tightString.c -o obj/tightString.o
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/run.c -o obj/run.o
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/splay.c -o obj/splay.o
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/splayTable.c -o obj/splayTable.o
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/graph.c -o obj/graph.o
src/graph.c: In function 'addBufferToDescriptor':
src/graph.c:870:14: warning: variable 'twinDescr' set but not used [-Wunused-but-set-variable]
  Descriptor *twinDescr;
              ^
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/run2.c -o obj/run2.o
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/fibHeap.c -o obj/fibHeap.o
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/fib.c -o obj/fib.o
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/concatenatedGraph.c -o obj/concatenatedGraph.o
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/passageMarker.c -o obj/passageMarker.o
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/graphStats.c -o obj/graphStats.o
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/correctedGraph.c -o obj/correctedGraph.o
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/dfib.c -o obj/dfib.o
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/dfibHeap.c -o obj/dfibHeap.o
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/recycleBin.c -o obj/recycleBin.o
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/readSet.c -o obj/readSet.o
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/shortReadPairs.c -o obj/shortReadPairs.o
src/shortReadPairs.c: In function 'pushNeighbours':
src/shortReadPairs.c:928:8: warning: variable 'lastCandidate' set but not used [-Wunused-but-set-variable]
  Node *lastCandidate = NULL;
        ^
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/locallyCorrectedGraph.c -o obj/locallyCorrectedGraph.o
src/locallyCorrectedGraph.c: In function 'tourBusNode_local':
src/locallyCorrectedGraph.c:380:8: warning: variable 'destination' set but not used [-Wunused-but-set-variable]
  Node *destination;
        ^
src/locallyCorrectedGraph.c: In function 'clipTipsVeryHardLocally':
src/locallyCorrectedGraph.c:427:18: warning: variable 'twin' set but not used [-Wunused-but-set-variable]
  Node *current, *twin;
                  ^
src/locallyCorrectedGraph.c: In function 'correctGraphLocally':
src/locallyCorrectedGraph.c:517:8: warning: variable 'index' set but not used [-Wunused-but-set-variable]
  IDnum index, nodeIndex;
        ^
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/graphReConstruction.c -o obj/graphReConstruction.o
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/roadMap.c -o obj/roadMap.o
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/preGraph.c -o obj/preGraph.o
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/preGraphConstruction.c -o obj/preGraphConstruction.o
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/concatenatedPreGraph.c -o obj/concatenatedPreGraph.o
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/readCoherentGraph.c -o obj/readCoherentGraph.o
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/crc.c -o obj/crc.o
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/utility.c -o obj/utility.o
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/kmer.c -o obj/kmer.o
gcc -Wall -O3 -D MAXKMERLENGTH=120 -D CATEGORIES=6 -c src/scaffold.c -o obj/scaffold.o
gcc -Wall -O3 -lm -o velveth obj/tightString.o obj/run.o obj/recycleBin.o obj/splay.o obj/splayTable.o obj/readSet.o obj/crc.o obj/utility.o obj/kmer.o third-party/zlib-1.2.3/*.o
obj/readSet.o: In function `convertConfidenceScores':
readSet.c:(.text+0x3c47): undefined reference to `pow'
readSet.c:(.text+0x3c8a): undefined reference to `pow'
collect2: error: ld returned 1 exit status
make: *** [velveth] Error 1
```

Any ideas/suggestions?
The significant problem is here:

```
gcc \
    -Wall \
    -O3 \
    -lm \
    -o velveth \
    obj/tightString.o \
    obj/run.o \
    obj/recycleBin.o \
    obj/splay.o \
    obj/splayTable.o \
    obj/readSet.o \
    obj/crc.o \
    obj/utility.o \
    obj/kmer.o \
    third-party/zlib-1.2.3/*.o
```

Find the link commands in the Makefile, and move `-lm` to the end of the line. You can arrange this at the top of the Makefile by removing `-lm` from `LDFLAGS` and adding it to `Z_LIB_FILES` instead. (Wow, that's forked from a prehistoric version of velvet!)
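For illustration, the corrected link line would look like this; the object list is the one from the error output above, and the only point is that `-lm` now comes after the objects that reference it:

```bash
gcc -Wall -O3 -o velveth \
    obj/tightString.o obj/run.o obj/recycleBin.o obj/splay.o obj/splayTable.o \
    obj/readSet.o obj/crc.o obj/utility.o obj/kmer.o \
    third-party/zlib-1.2.3/*.o \
    -lm   # libm is now searched after readSet.o, so pow() resolves
```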
biostars
{"uid": 112109, "view_count": 2642, "vote_count": 1}
Hello everyone, I'm working with data from Agilent-014850 Whole Human Genome Microarray 4x44K G4112F arrays and the One Color Quick Amp Labeling Kit (Agilent Technologies, Santa Clara, USA). I'm trying to set up the analysis in R:

    # Load limma
    library(limma)
    # Set-up
    targetinfo <- readTargets("Targets.txt", row.names = "FileName", sep = "")
    # Read files
    project <- read.maimages(targetinfo, source = "agilent", green.only = TRUE)
    # Background correction
    project.bgc <- backgroundCorrect(project, method = "normexp", offset = 16)
    # Normalize the data with the 'quantile' method for 1-color
    project.NormData <- normalizeBetweenArrays(project.bgc, method = "quantile")
    # Create the study design and comparison model
    design <- paste(targetinfo$Target, sep = "")
    design <- factor(design)
    comparisonmodel <- model.matrix(~0 + design)
    colnames(comparisonmodel) <- levels(design)
    # Checking the experimental design
    design
    comparisonmodel
    project.fit <- lmFit(project.NormData, comparisonmodel)

> #When I used
> project.fit <- lmFit(project.NormData$M, comparisonmodel)
> Error in array(x, c(length(x), 1L), if (!is.null(names(x))) list(names(x), :
> 'data' must be of a vector type, was 'NULL'
> #So, I used it without the $M
> project.fit <- lmFit(project.NormData, comparisonmodel)

    # Applying the empirical Bayes method
    project.fit.eBayes <- eBayes(project.fit)
    names(project.fit.eBayes)
    # Make individual contrasts and fit the new model
    CaseControl <- makeContrasts(CaseControl = "D0A-Control", levels = comparisonmodel)
    CaseControl.fitmodel <- contrasts.fit(project.fit.eBayes, CaseControl)
    CaseControl.fitmodel.eBayes <- eBayes(CaseControl.fitmodel)
    # Filtering results
    nrow(topTable(CaseControl.fitmodel.eBayes, coef = "CaseControl", number = 99999, lfc = 2))
    probeset.list <- topTable(CaseControl.fitmodel.eBayes, "CaseControl", number = 99999,
                              adjust.method = "BH", sort.by = "P", lfc = 2)
    # To save results
    write.table(probeset.list, "results.txt", sep = "\t", quote = FALSE)

Well, now it seems correct; what do you guys think? Best regards, Leite
You have to pass a comparison model to `lmFit()`. Also, for Agilent processing, the expression values are eventually stored in the 'M' or 'E' component of your object, and are not accessed via the `exprs()` function. Whether it is M or E depends on whether it is a 2- or 1-channel / colour array; since your data are one-colour, the values are in `E` (which is also why `$M` was NULL):

    comparisonmodel <- model.matrix(~0+design)
    colnames(comparisonmodel) <- levels(design)
    comparisonmodel
    fit <- lmFit(project.NormData$E, comparisonmodel)
biostars
{"uid": 287206, "view_count": 5567, "vote_count": 5}
Hi there! I am working on ribosome profiling data, and I want to use Python 3 multithreading to speed up my program. To describe the scope of my question: I use python scripts that take alignment data as input, like:

    python3 python_script.py <(samtools view file.bam)

and I handle it in the script with the fileinput module (**import fileinput**). It works like a charm! So, related to this: do you think it is safe to call 3 python scripts at the same time using the same file.bam (each via its own samtools view command)? For instance:

    python3 script1.py <(samtools view file.bam) &
    python3 script2.py <(samtools view file.bam) &
    python3 script3.py <(samtools view file.bam) &

I am not sure: if a samtools view command is running on file.bam, can I safely read the file again with another samtools view command? The output is not a problem; every script will write a different file. I gave the example with 3 scripts, but I will certainly use 3 functions. Any kind of help or suggestion will be highly appreciated. Wocka
You can call samtools (or any other program) any number of times on the same BAM file without issue. You can also use pysam inside Python to open the same file multiple times within the same script.
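So the pattern from the question is fine as written; each process substitution spawns its own `samtools view` with its own read-only handle on file.bam. Adding a `wait` makes the parent shell block until all three readers finish:

```bash
python3 script1.py <(samtools view file.bam) &
python3 script2.py <(samtools view file.bam) &
python3 script3.py <(samtools view file.bam) &
wait   # returns once all three background jobs have completed
```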
biostars
{"uid": 212143, "view_count": 1932, "vote_count": 1}
This question is similar to this one ([Get rs number based on position](/p/8920/)).

I have a text file with SNPs in the chr:position format:

    10:71086
    10:72876
    10:75794

I was wondering if there is an R package (or perhaps one in Bioconductor) that can take these as input values and provide the SNP rs number associated with them?
I think JC has already pointed out a nice resource. The other option would be:

[Get rs number based on position](/p/8920/)
biostars
{"uid": 72066, "view_count": 44865, "vote_count": 6}
Hi All, I have a table with six columns (shown below).

![enter image description here][1]

[1]: http://uupload.ir/files/xm5g_taible.png

I want to remove the SNPs whose value in the SNP column is less than 10. What is the best way to do this? Best Regards, Mostafa
Read the file, and keep only the rows where the SNP column is greater than or equal to 10:

    data <- read.table("SNP.txt", header = TRUE)
    data <- data[ data$SNP >= 10, ]

To exclude the first BP column, and to add 20 kb to the BP column:

    data <- data[ , -3 ]
    data$BP <- data$BP + 20000
biostars
{"uid": 337161, "view_count": 1292, "vote_count": 1}
Hi everyone! I am trying to find the best way to make 2 boxplots for a specific gene, from data found in one row across a subset of columns of data frame x. x has 634 rows and 128 columns. Each row is specific to a gene, and column 1 has the gene name. For the gene in row #1:

- data in columns 2:48 should go into one boxplot
- data in columns 49:128 should go into another boxplot

The data frame looks something like this:

```
  gene  accepted_hits_x1.bam  accepted_hits_x1.bam  etc....
1 AARS1 -6                    0                     etc....
```

I also want to be able to see each data point that makes up the boxplot plotted on top of it. The problem I am running into is that my data (residuals from the mean, meaning x value - mean) are a mix of positive and negative values, and it appears that the plot is excluding the negative values...

```r
data <- unlist(subset(datavr, gene =="IGF1R", select=2:128))
news <- data.frame(data=data, factor=c(rep(1,47), rep(2,80)))
news$data <- (log10(as.numeric(news$data)) + 1)
g <- ggplot(data=news, aes(x=as.factor(factor), y=data))
g + geom_boxplot() + geom_point(color="purple", size=3) +
  xlab("A38-41 A38-5 ") + ylab("log10(Residual from Mean)+1") +
  ggtitle("IGF1R inside region") + theme(plot.title = element_text(face="bold"))
```

It keeps giving me an error saying:

```r
Removed 110 rows containing missing values (geom_point)
```

Could this be because the values are negative, so taking log10(value)+1 produces missing values?
To make a boxplot with base graphics in R, you need to create a "factor" vector, which indicates which category each of your data points belongs to:

```r
f <- factor(c(rep("Group 1", 47), rep("Group 2", 80)))
```

Then call `boxplot` with a formula relating the data to the factor:

```r
boxplot(as.numeric(dat[1, 2:128]) ~ f)
```

Edit: Actually, creating the factor is not even necessary in this case. You can just list the two data vectors as multiple arguments:

```r
boxplot(as.numeric(dat[1, 2:48]), as.numeric(dat[1, 49:128]))
```
biostars
{"uid": 145153, "view_count": 8143, "vote_count": 1}
Hi, I recently found the very helpful http://cancergenome.broadinstitute.org/ portal for finding out which gene is mutated in which cancer, the frequencies, and vice versa. But I can't find out how to determine frequencies for combinations of genes. For example, based on data from this portal, PTEN is mutated in 7% of all cancers and in 30% of glioblastomas. Rb is mutated in 3% of all patients and in 8% of glioblastomas. What would be the easiest way for a non-bioinformatician to find out the frequency of patients with both Pten and Rb mutations in the general population, in glioblastomas, and in other cancers?

Thanks for your help! Andrey
You can get at this kind of information using the [cBioPortal for Cancer Genomics](http://www.cbioportal.org/public-portal/).

To get a relevant visualization:

1. Go to http://www.cbioportal.org/public-portal/
2. Enter your gene names using their official HUGO designations: PTEN, RB1. One per line.
3. If you are only interested in mutations, select the 'Only Mutation' option
4. Hit the submit button
5. Hover your mouse over the bar for your cancer type of interest and select 'click to view details'
6. This will bring you to an 'OncoPrint' that shows which cases have mutations in: PTEN, RB1, both, neither
7. Also check out the additional tabs of this view to explore the data for these two genes further. You can also refine your query using the 'Modify Query' section.

Here is an example query for: [RB1 and PTEN in GBM](http://www.cbioportal.org/public-portal/index.do?genetic_profile_ids_PROFILE_COPY_NUMBER_ALTERATION=&Action=Submit&genetic_profile_ids_PROFILE_MUTATION_EXTENDED=gbm_tcga_mutations&case_set_id=gbm_tcga_sequenced&Z_SCORE_THRESHOLD=2.0&cancer_study_id=gbm_tcga&gene_list=PTEN+RB1&gene_set_choice=user-defined-list&tab_index=tab_visualize&)

In this example, 31% have PTEN mutations, 9% have RB1 mutations, and 5% have both. According to the 'mutual exclusivity' tab there is a highly significant tendency toward co-occurrence of mutations in these genes (odds ratio: 3.875, p-value: 0.001463)
biostars
{"uid": 94192, "view_count": 4914, "vote_count": 2}
Hello, my question is how to predict one connected protein structure from two separate known structures. This should be different from the protein-protein docking approach, which involves non-covalent interactions; here the two proteins should be covalently linked together. Is it possible to first manually link the two 3D structures together using some software, and then do the energy minimization? Any suggestions are appreciated!
What you want to do is possible, but can't be done reliably without external restraints. Protein modeling typically means satisfying a number of angle and distance restraints. If you add a linker connecting two domains of known structures, there will be no restraints specifying their relative orientation unless you add them manually. Energy minimization will help you only if you are already close to the optimal domain interaction, but it isn't meant for making proteins move around a lot. There are many recent methods predicting intra-protein contacts between protein residues based on co-evolution. Those approaches can be extended to inter-protein contacts as well, and may provide some restraints for your modeling procedure.

- https://academic.oup.com/nar/article/46/W1/W432/5001161
- https://link.springer.com/protocol/10.1007%2F978-1-4939-9873-9_6
biostars
{"uid": 434597, "view_count": 799, "vote_count": 1}
Hello, my name is Maria. I'm a master's student from Estonia. I want to ask some basic (okay, maybe a little bit silly) questions. I'm starting to work with 1000 Genomes data and I've never worked with genome sequences before. I want to download sub-sequences of a genome. The instructions say to indicate them like 1:1-50000. I understand that the 1 in front of the ":" refers to the chromosome number, is that correct? And 1-50000 would be the first 50000 nucleotides? Are the genomes of different people of different lengths due to copy number variations? Does sequencing against a reference genome take account of these differences, or would all the genomes in 1000 Genomes be of the same length because they are aligned to a reference genome? What if I wanted to obtain specific genes from the sequences? Is there any tool to do that? Thank you!
> I want to download sub-sequences of a genome. The instruction says to indicate it like 1:1-50000. I understand that 1 in front of : refers to chromosome number, is that correct? And 1-50000 would be the first 50000 nucleotides?

Yes, you are correct.

> Are the genomes of different people of different lengths due to copy number variations?

Yes. Indels will also contribute to differences in genome lengths.

> Does the sequencing according to a reference genome take account of these differences or would all the genomes in 1000Genomes be of the same length as they are aligned to a reference genome?

Reads from different genomic samples are aligned to the same reference genome so that multiple genomes can be easily compared to each other for the presence/absence of a genomic variant; otherwise it would be tough to carry out any comparisons. In short, you can say that all the genomes in 1000 Genomes are of the same length with respect to the coordinate location of a variant/gene, though BAM files contain enough information to predict copy number variants and to identify insertions and deletions differing between individuals.

> What if I wanted to obtain specific genes from the sequences? Is there any tool to do that?

You can download the coordinates of your gene of interest from Ensembl (gtf file) or the UCSC genome browser (gtf/bed), and then use those coordinates (e.g. chr2:100000-1020000) to fetch the reads overlapping that region from the bam file, using the samtools view function as sketched below.
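For example (file names are placeholders; the reference must be faidx-indexed, and the BAM must be coordinate-sorted and indexed for region queries):

```bash
# pull a sub-sequence of the reference: the first 50,000 bases of chromosome 1
samtools faidx reference.fa 1:1-50000

# pull the reads overlapping a gene's coordinates from a BAM file
samtools index sample.bam
samtools view -b sample.bam 2:100000-1020000 > gene_region.bam
```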
biostars
{"uid": 131557, "view_count": 1694, "vote_count": 2}
I would like to add read group info (`-R`) during the mapping/alignment stage as part of my GATK variant calling pipeline. I am doing something like this:

    bwa mem \
        -M \
        -t 8 \
        -v 3 \
        -R <(sh a-illumina-read-group.sh $1) \
        "$path_dr_bwaindex_genome" \
        $1 $2

where the read group string is generated automatically from the fastq file by the shell script `a-illumina-read-group.sh`. It produces a string like:

    '@RG\tID:ST-E00215_230_HJ3FMALXX_2\tSM:ST-E00215_230_HJ3FMALXX_2_ATCACG\tLB:ATCACG\tPL:ILLUMINA'

but when I run bwa, it fails with this error:

    [E::bwa_set_rg] the read group line is not started with @RG

I have tried excluding the single quotes (`''`) around the read group info, but that didn't change anything. I also tried variations in how the variable is passed:

    -R=$(sh a-illumina-read-group.sh $1) \
    -R=$(echo $(sh a-illumina-read-group.sh $1)) \

Just as a test, I tried:

    rg='@RG\tID:ST-E00215_230_HJ3FMALXX_2\tSM:ST-E00215_230_HJ3FMALXX_2_ATCACG\tLB:ATCACG\tPL:ILLUMINA'
    -R <(echo $rg) \

But they all produce the same error. I would appreciate any solutions to this issue. I understand that I can add read groups later on using Picard's `AddOrReplaceReadGroups`, but I thought it might be convenient to do it here in one step. Thanks.
What worked for me was to read the read group information from the fastq file during the mapping run, and pass the string itself to `-R`. (Process substitution `<(...)` hands bwa a path like `/dev/fd/63` rather than the string, which is why bwa complains that the line does not start with @RG.) My read name in the fastq file looks like this: `@ST-E00274:188:H3JWNCCXY:4:1101:5142:1221 1:N:0:NTTGTA`. The bash file looks like this:

    #!/bin/bash
    header=$(zcat $1 | head -n 1)
    id=$(echo $header | head -n 1 | cut -f 1-4 -d":" | sed 's/@//' | sed 's/:/_/g')
    sm=$(echo $header | head -n 1 | grep -Eo "[ATGCN]+$")
    echo "Read Group @RG\tID:$id\tSM:$id"_"$sm\tLB:$id"_"$sm\tPL:ILLUMINA"

    bwa mem \
        -M \
        -t 8 \
        -v 3 \
        -R $(echo "@RG\tID:$id\tSM:$id"_"$sm\tLB:$id"_"$sm\tPL:ILLUMINA") \
        "$path_bwaindex_genome" \
        $1 $2 | samblaster -M | samtools fixmate - - | samtools sort -O bam -o "mapped-bwa.bam"

And the bash file is run as:

    bwa-mapper.sh read_1.fq.gz read_2.fq.gz

You can remove the `| samblaster -M | samtools fixmate - - | samtools sort -O bam -o` part and replace it with `>` if you don't need it. `samblaster` marks duplicates like Picard, `fixmate` fills in mate coordinates and mate-related flags, and the final step sorts the alignments and writes the output BAM.
biostars
{"uid": 280837, "view_count": 15579, "vote_count": 7}
Hi, I'm new to both proteomics and Biostars, and I would like to start with a simple but general question. I know that for RNA-Seq people mostly just assume normality of the data, because achieving normality with such a small number of samples is always risky and difficult. I was wondering how this is handled in the proteomics field. What kinds of tests might one use to analyze the data when a t-test is not applicable due to the non-normal behavior of the data? What alternatives do I have there? I saw that some people are using LIMMA. Can LIMMA handle non-normally distributed data? Thanks, G.
When using `limma` in proteomics/metabolomics, the data are usually log-transformed first, which usually makes them normal/more homoscedastic (or they are assumed to be so); see this example [paper][1]. By the way, people do not usually assume normality for RNA-seq data: they usually model raw counts with negative binomials (e.g. `DESeq2`, `edgeR`) or apply some other transformation (e.g. `limma-voom`).

[1]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5418186/
biostars
{"uid": 9491852, "view_count": 860, "vote_count": 1}
Hi there, I'm new to bioinformatics tools and I need help. Let me tell you what I have done :) I have an assembly from Trinity and I wanted to know the CG content of each contig, so I generated a file with this information and imported it into R. Then I calculated the CG content of all my contigs in R by doing:

    ## load data
    data <- read.table("CG_content_contig.txt", header = T)
    ## Create a new variable called CG
    data$CG <- (data$C+data$G)/(data$A+data$C+data$G+data$T)
    ## Keep contigs with CG content > 23%
    CG_content_more_than_23 <- data[which(data$CG > 0.23), ]
    ## Keep only the names of the contigs
    name_contigs <- CG_content_more_than_23$chr
    ## Write to a .txt file
    write.table(name_contigs, file="name_contigs.txt", row.names=FALSE, sep='\t')

As you see, I have created a file with the contigs whose CG content is higher than 23%. I have uploaded this file (name_contigs.txt) to my server in order to work with it. It looks like this:

    TRINITY_DN134693_c0_g1_i1
    TRINITY_DN109669_c0_g1_i1
    TRINITY_DN109679_c0_g1_i1
    TRINITY_DN114999_c0_g1_i1
    TRINITY_DN114910_c0_g1_i1

I also have a .fq file that looks like this:

    >TRINITY_DN134617_c0_g1_i1
    AATAAAAATAAATAAAAATCAATAAAAATATTATAATACAATATAATATAAAATAATATAAAAATTCTACAATAAGAATAAAGTATAATTTTTTAGATTATAAGAGGATATGTTAATACATAGTATTCTGTTTGTTATTGTAGAAAAAACATACAGAAACTTTTTGTATATATAGTCTCATTTTATATATATAAATAAAAATGAACATTAATGAAATGAAATTAAGAGTCGTTTTATTAAAAATAGCTATAAAAAATAACAACA
    >TRINITY_DN134643_c0_g1_i1
    GCATGGTAGTAAAGTATAATGACATAGCAAAAATATTTAAAATAAAAAAAAATTACTATTATAATTTTTTCTGTATAACATAAACGTTTTTAATGATATTATATTAATTACATATAAAAATAGCATAATAAAAATATTTAGTTATAAAATTTATTATTTTATTTTTTTTTTTTTGTTATATACTTTCTCAGAACATTAATTTGTCATCAGTTCTATTATATTGATAAACTATTCAATTGCTTTAATA

What I want to do is keep only the contigs that are NOT in the .txt file. Maybe I should create a pipeline with the grep command?
You should use the `faSomeRecords` [utility from Jim Kent][1] (UCSC, linux binary linked). Add execute permissions after download: `chmod u+x faSomeRecords`.

    faSomeRecords - Extract multiple fa records
    usage:
       faSomeRecords in.fa listFile out.fa
    options:
       -exclude - output sequences not in the list file.

[1]: http://hgdownload.soe.ucsc.edu/admin/exe/linux.x86_64/faSomeRecords
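A hypothetical invocation for this case (the Trinity fasta name is a placeholder). Note that `write.table()` as used in the question also writes a header line and puts quotes around each contig name, so strip those first (or re-export with `quote = FALSE, col.names = FALSE`):

```bash
chmod u+x faSomeRecords
# drop the header line and the quotes that write.table added
sed '1d; s/"//g' name_contigs.txt > contig_ids.txt
# keep only the records NOT listed in contig_ids.txt
./faSomeRecords -exclude Trinity.fasta contig_ids.txt filtered_contigs.fa
```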
biostars
{"uid": 360658, "view_count": 714, "vote_count": 1}
Hi everyone! Sorry for this noob question, but I just wanted to ask: what is the best practice in the field for publishing RNA sequencing data analysis scripts? Does it usually happen before the publication, or after? Currently I keep the repo private, and the paper is not published yet.
I would strongly recommend publishing them before the paper. That way, you can get your code peer reviewed before you submit your article. That may improve your code. You may have to arrange reviewers yourself for this (maybe an idea for a new BioStars section?).
biostars
{"uid": 9479724, "view_count": 815, "vote_count": 2}
Hello, I'm using deeptools computeMatrix/plotHeatmap, and am getting a large amount of missing data (i.e. black in the graph).

Commands used
----------

The call to computeMatrix takes in ChIPseq peaks, and corresponding signal in bigWig format:

**Command 1: computeMatrix**

    computeMatrix reference-point --referencePoint center -S file1.bw file2.bw file3.bw file4.bw -R peaks.bed -a 500 -b 500 -o matrix.gz

*Note: actual filenames and the -p option are omitted*

*Note: --binSize is omitted or 1*

This is followed by:

**Command 2: plotHeatmap**

    plotHeatmap -m $infile -o $ofile --outFileSortedRegions ${ofile/.$outformat/.bed} --sortUsing mean --outFileNameMatrix ${ofile/.$outformat/.mat} --zMin 0

*Where: infile=matrix.gz (output from the first command); ofile=matrix.$outformat; outformat typically equals "png"*

Output
----------

I have verified that the data are indeed not missing, in at least the first few examples. See the highlighted area of the attached graph (red box).

![plotHeatmap screenshot][1]

Here is a chunk of the corresponding data stored via the --outFileNameMatrix option:

> 1.256 1.256 1.256 1.256 1.256 1.256 1.256 1.256 1.256 1.256 1.346 1.346 1.346

I've validated these numbers by looking specifically at this region of the input file (equivalent to ***file1.bw*** in **Command 1**):

    #converted to wiggle
    track type=wiggle_0
    fixedStep chrom=chrII start=2378124 span=1 step=1
    1.25641036
    1.25641036
    1.25641036
    1.25641036
    1.25641036
    1.25641036
    1.25641036
    1.25641036
    1.25641036
    1.25641036
    1.34615386
    1.34615386
    1.34615386

*Note: this matches the above matrix output when --binSize is set to 1 for **Command 1: computeMatrix**; the default setting (10) gives the expected results.*

This value range should be printed as yellow according to the figure legend (not in the screenshot), and represents the middle range of the data being plotted. Also, adding the computeMatrix parameter --missingDataAsZero, you would expect all of that black to become red, right? That is not reflected in this plot:

![plotHeatmap with black missing data versus zeroes][2]

## Question ##

Does anyone know what might be going on here? Am I running the commands incorrectly (newb)?

[1]: https://s8.postimg.cc/xjbyknr3p/plot_Heatmap_screenshot.png
[2]: https://s8.postimg.cc/kqe8xoyxx/Na_Ns_As_Black_Versus_Na_Ns_As_Zero.png
Every time this issue is brought up, it turns out to be due to the interpolation done in order to make a large matrix fit into a much smaller number of pixels. Try one of the following:

1. Make the image larger, so there is less interpolation.
2. Increase the DPI.
3. Change the `--interpolationMethod` option.

I have yet to see an example where that doesn't produce the results you're expecting. At the end of the day this comes down to how best to compress a lot of information into a small image. At some point you have to throw information away, since there aren't enough pixels to represent everything, so no particular method will always produce the most desirable results.
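For example, a re-render along those lines might look like this; option names are as discussed above, and exact availability depends on your deepTools version:

```bash
plotHeatmap -m matrix.gz -o matrix.png \
    --dpi 300 \
    --heatmapHeight 28 --heatmapWidth 7 \
    --interpolationMethod nearest \
    --sortUsing mean --zMin 0
```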
biostars
{"uid": 322414, "view_count": 5339, "vote_count": 4}
Hi, I have a long list of different miRNAs, like below:

    hsa-miR-7641
    bta-miR-2904
    hhi-miR-7641
    hsa-miR-4454
    bta-miR-2478
    efu-let-7c
    mmu-miR-6240
    hsa-miR-7704

Is there any way to get all of their mature sequences at once, instead of searching miRBase one by one and copy-pasting?
It looks like you can download the sequence files from http://www.mirbase.org/ftp.shtml: "Fasta format sequences of all miRNA hairpins" and "Fasta format sequences of all mature miRNA sequences". When you have those files, a simple `grep -A 1 -f <youridentifiers.txt> <mirbase.fasta>` would do the trick; the `-A 1` also prints the sequence line that follows each matching header, since the miRBase fasta files keep each sequence on a single line.
biostars
{"uid": 215813, "view_count": 3111, "vote_count": 3}
I'm using the equation

    Num_Reads * Avg_Read_Length / Genome_Size

to calculate coverage. Here are my questions:

1. Should I only consider mapped bases in the query to calculate the read length, specifically the `M` and `=` CIGAR operations? For example, if a read has `60M10S` for a CIGAR string, would the read length be 60, since 10 bases were not aligned?

2. Given 1., why is it that my calculation of coverage differs from `samtools mpileup`? Wouldn't the average of column 4 in the mpileup output equal the average coverage?

Thanks for any help.
1) Yes: `M`, `=`, and even `X`.

2) Yes, ignore the clip.

3) Did you consider that mpileup applies some filtering options by default, like:

    -Q, --min-BQ INT            skip bases with baseQ/BAQ smaller than INT [13]
    --ff, --excl-flags STR|INT  filter flags: skip reads with mask bits set [UNMAP,SECONDARY,QCFAIL,DUP]

What's the output of `samtools depth`?
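For comparison, a quick way to compute the mean depth both ways (the BAM name is a placeholder):

```bash
# mean depth from samtools depth; -a also counts zero-coverage positions
samtools depth -a sample.bam | awk '{ sum += $3 } END { print sum/NR }'

# mpileup with its default base-quality filter disabled, for comparison
samtools mpileup -Q 0 sample.bam | awk '{ sum += $4 } END { print sum/NR }'
```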
biostars
{"uid": 250855, "view_count": 2715, "vote_count": 4}
Hi all, I think GATK is a great toolbox. There are quite a few steps involved, and I was wondering about the impact and importance of joint genotyping, in particular when working with very small sample sizes (around 10-15 samples). I read that it can lead to a loss of unique SNPs, which we would be particularly interested in. Does anyone have experience with this and can tell me about the effects on the outcome? In particular, in quantitative terms: do I gain many more overlapping SNPs, and do I lose a lot of unique SNPs? Or is the impact negligible? Thanks for your input!
There is a great answer provided previously: https://www.biostars.org/p/10926/

To summarize:

> joint calling has a biased false negative rate: it does better if a SNP is shared between samples, but worse if it is a singleton
biostars
{"uid": 284097, "view_count": 4973, "vote_count": 3}
Does anyone know if it is possible to specify which assembly to use when constructing a query for Entrez? For example, if I do a query like this with EDirect:

    esearch -db gene -query "brca1 [ALL]human[ORGN]" -sort "relevance" | \
    efetch -format docsum | \
    xtract -pattern DocumentSummary -element Name MapLocation Description OtherAliases Id \
        -block GenomicInfo -element ChrLoc ChrAccVer ChrStart ChrStop

the GenomicInfo that I get seems to follow the GRCh38 assembly. The application that I'm developing needs to use the GRCh37 assembly, however. Any help is much appreciated.
Actually, it turns out that this can be done. I wrote to NCBI and got a response. The key is to look in the **LocationHistType** block for:

- a specific annotation release (see [here](http://www.ncbi.nlm.nih.gov/genome/guide/human/release_notes.html) and [here](http://www.ncbi.nlm.nih.gov/news/08-27-2013-human-annotation-105/) for some explanation of annotation releases). For example, GRCh37.p13 is coded by NCBI as Annotation Release *105*.
- the corresponding assembly accession, which is a RefSeq Assembly ID. For GRCh37.p13 it is *GCF_000001405.25* (see [here](http://www.ncbi.nlm.nih.gov/assembly/GCF_000001405.25/) for more information).

The EDirect commands should look something like this:

    esearch -db gene -query "brca1 [ALL] AND human [ORGN]" | \
    efetch -format docsum | \
    xtract -pattern DocumentSummary \
        -element Name MapLocation Description OtherAliases Id ChrLoc \
        -block LocationHistType \
        -match "AnnotationRelease:105" -and "AssemblyAccVer:GCF_000001405.25" \
        -element ChrAccVer ChrStart ChrStop
biostars
{"uid": 98483, "view_count": 4224, "vote_count": 2}
Hello to all, and I hope you are all doing well. I am a biology student doing a study on bird gut microbiota through 16S rRNA analysis. I collected 3 biological samples per time point (we have five different time points), and I want to see the changes in the birds' gut microbiota across those five time points. However, one of the time points only has two biological replicates, as the other birds did not have gut contents for me to extract. I have extra birds per time point in case anything happens to the original biological replicates, but for this particular time point the guts of all the extra birds and of one of the replicates were empty, so I'm stuck with only two workable biological replicates for this time point. Now I am concerned, since most microbiome studies use a minimum of biological triplicates, and I've read in many statistical reviews that more biological replicates are necessary to reliably infer the average or variation in a microbial population. I'm wondering if there's a way for me to statistically justify this discrepancy in biological replicates, a way for me to proceed with the analysis of this time point, or any feasible way to analyze samples with uneven biological replicates. Thank you so much, and apologies if I'm using statistical terms wrongly, as I am not completely versed in the field. Thanks again and regards.
The good news is that this is a common problem, and if you have no way of obtaining additional data, I don't think you'd be faulted for just analyzing what you have. Review may be difficult if you're looking to publish formally, but you should still analyze it and explain what happened. Simple methods like t-tests can still be applied for comparison between n=3 and n=2 groups.

However, there are more correct ways of handling microbiome missing data and timeseries analysis than doing pairwise comparisons. The bad news is that these methods are generally high-level and require some programming knowledge to apply. An example would be [this publication][1], in which they discuss the problem you're having with missing data. The software they use can be found [here][2].

In general, microbiome data (and 16S in particular) are very noisy, and it often takes very large sample sizes to make biologically-relevant inferences. It is great that you have done a timeseries though, as it will improve your statistical power by a lot. Typically, there is high variability between individuals' microbiomes, but collecting data over time lets you model the variability within each individual's microbiome as well as between them. Probably the best example of a high-powered microbiome time-series study is the [HMP2 publication][3], but even with a very large sample size, they weren't able to say much biologically speaking. So don't be disappointed if the analysis doesn't find much (my first 16S study didn't); it is still valuable to analyze and share it, since it contributes to our global database of 16S data.

[1]: https://academic.oup.com/bioinformatics/article/34/3/372/4157442
[2]: https://github.com/tare/GPMicrobiome
[3]: https://www.nature.com/articles/s41586-019-1237-9
biostars
{"uid": 9497468, "view_count": 578, "vote_count": 1}
Hi Biostars: I am facing a problem with differential expression analysis, where due to the intrinsic features of the samples we can't have replicates in one condition.

**Study design:** We have a patient with a very rare translocation; no other similar translocation has been described in the world, and we expect that expression of the genes surrounding the translocation is altered. Hence, we have performed RNA-seq of the translocated patient and 4 controls.

**How should I proceed?** Before you kill me for asking "Can I do differential expression without replicates?", I know that [EdgeR][1] and [DEseq2][2] provide ways to proceed without replicates, and that [NOISeq][3] can be used without replicates. However, in this case we have replicates for the controls, where the biological variance of each gene could be estimated, but not for the patient, as there is no other such individual in the world.

So, **my question is:** Is there any way of estimating the variance for the control group, and then comparing to the expression in one single sample, the patient in this case? Or better, doing a dispersion estimation for the translocation patient?

**My options** (from the EdgeR docs):

- Use the genes and transcripts that are far away from, or on other chromosomes than, the chromosome with the translocation to estimate the dispersion of that sample
- Use a dispersion value defined previously

Have any of you faced a similar problem, and, furthermore, has anybody tested how the two above-mentioned EdgeR options for working without replicates perform?

Thanks for reading :)

[1]: https://www.bioconductor.org/packages/devel/bioc/vignettes/edgeR/inst/doc/edgeRUsersGuide.pdf
[2]: https://bioconductor.org/packages/release/bioc/vignettes/DESeq2/inst/doc/DESeq2.html
[3]: https://www.bioconductor.org/packages/release/bioc/vignettes/NOISeq/inst/doc/NOISeq.pdf
The short answer is that you just proceed as usual. limma, edgeR and DESeq2 have no trouble with this scenario, although the edgeR quasi-likelihood pipeline would be better than the other options. The packages simply estimate variability from groups where you do have replication (controls in your case), and apply the same dispersion estimates to all the samples in all the groups. The edgeR and DESeq2 pipelines for no replicates are for when *none* of the groups have any replicates. You however do have replicate controls.

edgeR can be used right down to a two-group comparison with n=2 in one group and n=1 in the other. I'm not saying that such small sample sizes are desirable, but the package will do the best it can with what it gets and will present scientifically defensible results even in that extreme scenario. You can see an example of an n=2 vs n=1 analysis in the discussion to this paper (i.e., my reply to Conrad Burden's first report): https://f1000research.com/articles/5-1438

BTW, the same question has been asked several times on the Bioconductor Support forum, for example: https://support.bioconductor.org/p/63585/ or https://support.bioconductor.org/p/61904/
biostars
{"uid": 266211, "view_count": 5010, "vote_count": 2}
Hi :)

A few weeks ago I was looking for a tool that would help me get "DNA composition" statistics for my sequencing. Something that would give me a dataset with which I could ask questions about GC bias, over-represented sequences, motifs, etc. There are tools to answer each question specifically, but I was looking for something more general from which many analyses could be built. This led me to k-mer counting, and all the downstream tools which leverage k-mer result files.

k-mer counting tools are pretty cool, but all the ones I tried had some drawbacks, like high RAM requirements and very long run-times (although the latest bloom-filter based tools seem to mitigate this somewhat), but most importantly requiring a specific k-mer size to be chosen. I really wanted all mers in the dataset, so I could look at 'GC' and 'GCG' and 'GCCGACGGACGAC' without having to re-run any analyses. I couldn't find a tool like this after a brief search, so I gave up and wrote my own in NumPy based on suffix arrays.

Two weeks later I have a functional program in the sense that I get results, but before I invest any time making it usable for others, I thought I should investigate further whether there are tools that already do this. Making a suffix array was a nice learning experience for me, so I haven't lost anything if such tools already exist, and if they do I'd love to compare performance characteristics; if not, I might consider tidying up the code and writing proper documentation. Does anyone know of such tools?

Thank you so much, and happy Diwali :)
This is an interesting question because even though there are dozens of k-mer tools available, they do all seem to have the same limitations: very short k-mer length requirements (usually <32 bp) and fixed-length search. There are some new tools reporting performance improvements, but they still seem to have those features. Tallymer (part of [GenomeTools](http://genometools.org)) is the only tool, to my knowledge, that can compute statistics for an arbitrary length (or range) of k-mers. It sounds like you want the 'occratio' subcommand, which runs for a range of k-mers (as opposed to the 'mkindex' command, which runs for a fixed length). Here is an example to get a range of unique/nonunique k-mers and get the relative amounts:

    gt tallymer occratio -output unique nonunique -minmersize 10 -maxmersize 180 -esa db
    gt tallymer occratio -output unique relative -minmersize 10 -maxmersize 180 -esa db

Note, you have to create 'db' first, but this was just an example. I can't go into the details, but I know that Tallymer uses a data structure called an enhanced suffix array. This is by the same author as Vmatch and MUMmer (and many other tools). I should mention that Vmatch is also amazingly flexible for doing k-mer arithmetic, and we could probably come up with an equivalent command using that tool. I agree with you about learning by doing, but this is a problem I would personally not try to solve, since there are great tools available that allow you to answer virtually any question you may have about genomes and k-mers.
biostars
{"uid": 165465, "view_count": 6989, "vote_count": 6}
Has anyone used the Araport11 genome annotation file to replace the TAIR10 genome annotation file? Araport11's GTF was released in 2016, which is newer than TAIR's GTF, and Araport11 also provides gene names, like `gene_id "AT1G01060"; gene_name "LHY";`. I want to ask whether Araport11 is the better choice of genome annotation file.
That would be up to you to decide. The annotation provided by Araport11 is indeed newer than TAIR10. They should nonetheless be interchangeable; the general rule, however, is to use the annotation from the same source as the genomic sequence you use (though the genome for TAIR10 and Araport11 should be the same). If Araport11 suits your needs better, then you can safely go for that one.
biostars
{"uid": 360710, "view_count": 2542, "vote_count": 1}
I have many (>100) bed files in this format:

file1

    chr1 1 2 2
    chr1 10 11 3
    chr1 50 51 4

file2

    chr1 1 2 10
    chr1 10 11 8
    chr10 2 3 8

fileN

    chr1 1 2 1
    chr1 50 51 2
    chr10 2 3 9

The files have some sites in common, but not all of them. My goal is to obtain a single file that looks like this:

    chrm  start  end  file1  file2  fileN
    chr1  1      2    2      10     1
    chr1  10     11   3      8      NA
    chr1  50     51   4      NA     2
    chr10 2      3    NA     8      9

I want to be able to do this efficiently for all 100 files that I have. I was thinking about using bedtools, but I don't want to use many many lines of intersections. The files are quite large as well. Does anyone have a suggestion on how to do this? Thanks!
[BEDOPS][1] `bedops` and `bedmap` are efficient in memory and time on whole-genome scale files. These tools also work with standard input and output streams, which allows chaining of operations and integration of set operations with standard Unix tools. These features make work on hundreds of files tractable.

**Method 1**

If you can get away without the `NA`, given [sorted][2] single-base BED files `A.bed`, `B.bed` through `N.bed` (N=100 or whatever), you could take their union and their merge (these are all stored under the directory `files`):

    $ bedops -u files/A.bed files/B.bed ... files/N.bed > union.bed
    $ bedops -m files/A.bed files/B.bed ... files/N.bed > merge.bed

Or if you want to work on all the files in this directory, more simply:

    $ bedops -u files/*.bed > union.bed
    $ bedops -m files/*.bed > merge.bed

Then map the IDs in the union file to the intervals in the merge file:

    $ bedmap --echo --echo-map-id --delim '\t' merge.bed union.bed > answer.bed

The file `answer.bed` would look like:

    chr1    1    2    1;10;2
    chr1    10   11   3;8
    chr1    50   51   2;4
    chr10   2    3    8;9

The `--multidelim '\t'` option could be added, but using the default semi-colon delimiter may make it easier to demonstrate the result. In this approach, there are no NA values, and ID values are presented in lexicographical order, not the order of the original input files.

**Method 2**

If you need an `NA` value positioned in order from columns 4 through N+3, where ID values are missing, then one option is to loop over all N BED files to generate N per-file maps:

    $ for fn in `ls files/*.bed`; do echo $fn; bedops -n 1 merge.bed $fn | awk '{ print $0"\tNA" }' | bedops -u - $fn | cut -f4 > $fn.map; done

The pipeline of `bedops` commands here uses `merge.bed` to fill in gaps where intervals are missing, adding `NA` as the ID value for missing intervals before doing the final mapping step with `cut`. At the end, each of the per-file maps is a column in the result you ultimately want. Finally, use `paste` to glue all these columns together into an N+3 column matrix:

    $ paste merge.bed files/*.bed.map > answer.mtx

The file `answer.mtx` will contain the intervals in the first three columns, and ID values from columns 4 through N+3, for input files `A.bed` through `N.bed`. It'll look like this:

    chr1    1    2    2    10   1
    chr1    10   11   3    8    NA
    chr1    50   51   4    NA   2
    chr10   2    3    NA   8    9

Basically, this is the result you want, but without the `chrm ... fileN` header at the top. Unix pipelines demo'ed in this answer will make all of this work go faster, by avoiding generating intermediate files as much as possible. If you have access to a compute cluster and each file is very large, or you have many hundreds of files, and you need to automate this process, each iteration or step in the serial per-map loop can be parallelized as a separate job on the cluster. Then you have a final, "dependent" job that does the `paste` step at the end, in [map-reduce][3] fashion.

[1]: https://github.com/bedops/bedops
[2]: http://bedops.readthedocs.io/en/latest/content/reference/file-management/sorting/sort-bed.html
[3]: https://en.wikipedia.org/wiki/MapReduce
biostars
{"uid": 320196, "view_count": 3198, "vote_count": 2}
samtools flagstat reports:

    some number + 0 in total (QC-passed reads + QC-failed reads)

but this "some number" does not match the actual number of reads in my paired fastq files. Is this expected?
The number of reads reported is the number actually in the file. If your aligner produced secondary alignments then this will often be higher than the original number in the fastq files.
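You can check this yourself: a count that excludes secondary (flag 0x100) and supplementary (flag 0x800) records should be much closer to your fastq read count:

```bash
samtools flagstat sample.bam           # total includes secondary/supplementary records
samtools view -c sample.bam            # same total, counted directly
samtools view -c -F 0x900 sample.bam   # primary records only
```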
biostars
{"uid": 193703, "view_count": 6461, "vote_count": 1}
Here is an example of a volcano plot produced using edgeR results from a DEG analysis. My question is: why are there no values in the middle of the plot (indicated by the red circle)?

![Volcano plot][1]

[1]: https://i.pinimg.com/originals/07/69/3d/07693dfcb3c8fe55fcae39d7caa64239.jpg
This is because there is a relationship between p-value and fold change. With no fold change (log2FC = 0) you have no significant difference, hence a p-value close to 1, which becomes close to 0 after taking -log10. With a higher fold change you can get lower p-values (hence higher -log10 values).
biostars
{"uid": 335901, "view_count": 6979, "vote_count": 2}
Hi there, I downloaded the RSEM expected_count data of the TCGA TARGET GTEx cohort from UCSC Xena to perform differential expression analysis. I noticed the unit is log2(x+1), but I failed to turn the data back into integers, and the input of DESeq2 should be a matrix of integers. Any suggestions? Thanks in advance! Best, Issac
log2(x+1) data are not suitable for analysis with DESeq2 (nor edgeR/voom), so you would at least have to convert them back to counts by computing (2^x) - 1. You could then simply round to the nearest integer. Alternatively, you can download untransformed RSEM output from Firehose at https://gdac.broadinstitute.org/
biostars
{"uid": 382643, "view_count": 2443, "vote_count": 1}
Dear all, I am new to the field. I am trying to analyze single-end 100 b FastQ files with ~70 million reads/sample, and I am trying to determine whether adapter sequences are present and, if so, how to handle them. I ran FastQC on the files, and the reports show each has an "overrepresented sequence" of an "Illumina index adapter".

[![sample1](https://i.ibb.co/pKQfQVm/sample1.png)](https://ibb.co/R6N3Nr5)

I have the following questions:

1. Does sample1 look like a trimmed file, or does it require adapter trimming?
2. If further trimming is recommended, what would be the best seq/adapter option to use for cutadapt/Trim Galore? [See below for my thoughts so far]
3. Based on the FastQC report, do I need to worry about the presence of any other adapter sequences besides the index?

My thoughts on question 2: the sequence for the Illumina index adapter format appears to be:

    GATCGGAAGAGCACACGTCTGAACTCCAGTCACNNNNNNATCTCGTATGCCGTCTTCTGCTTG

These are the adapter sequences found in my FastQC report for sample1:

    GATCGGAAGAGCACACGTCTGAACTCCAGTCACCATGGCATCTCGTATGC
    AGATCGGAAGAGCACACGTCTGAACTCCAGTCACCATGGCATCTCGTATG

I am thinking of using the options below for cutadapt/Trim Galore to remove the adapter(s):

    trim_galore sample1.fastq.gz -a GATCGGAAGAGCACACGTCTGAACTCCAGTCACNNNNNNATCTCGTATGCCGTCTTCTGCTTG -a AGATCGGAAGAGCACACGTCTGAACTCCAGTCACNNNNNNATCTCGTATGCCGTCTTCTGCTTG -q 20 --length 20 --fastqc

However, it seems that Trimmomatic, for instance, only takes care of the initial sequence of the index adapter (only up to the Ns and not after): https://github.com/timflutre/trimmomatic/blob/master/adapters/TruSeq3-SE.fa

Many thanks for your time and reply.
If you have Illumina Universal Adapter contamination, the sequence to trim is `AGATCGGAAGAGC`. In your case (0.5%) I would not even bother, and would directly align the files without any manipulation.
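If you do want to trim anyway, a minimal sketch (filenames are placeholders; Trim Galore auto-detects standard Illumina adapters, so no `-a` is needed):

```bash
trim_galore --quality 20 --length 20 --fastqc sample1.fastq.gz

# or the same idea with cutadapt, using the universal adapter prefix
cutadapt -a AGATCGGAAGAGC -q 20 -m 20 -o sample1.trimmed.fastq.gz sample1.fastq.gz
```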
biostars
{"uid": 426875, "view_count": 5307, "vote_count": 1}
Hi, I'm new to discoSNP (using 2.2.1). It seems that the script simple_test.sh was not designed for this version. The command line using fof.txt works fine. I think you should remove simple_test.sh from the package; it's confusing.
Hi Sebastien, Thanks for this message. Indeed, simple_test should not be part of the release. It won't appear in the 2.2.2 release. Pierre
biostars
{"uid": 166295, "view_count": 1160, "vote_count": 2}
Hello, I have a question on how to calculate fold changes when analyzing gene expression changes between multiple tumor and control samples per gene.
Or the Bioconductor limma package, if you are dealing with arrays and/or RNA-Seq data. Limma will give you log2 expression changes together with moderated test statistics.
biostars
{"uid": 140642, "view_count": 120841, "vote_count": 7}
I have five SRR files for one sample (five runs, single-read Illumina experiment). What steps should be taken before further analysis? I am converting these files to fastq format. Should I merge the five fastq files before mapping to the genome, or map them separately? I am new to NGS and don't fully understand what "runs" means. Should I understand them as five independent experimental repeats and analyse them separately?
Hi, most mappers accept multiple fastq files, so there is no need to merge them, as long as the read1 and read2 files are properly supplied. It is also best to retain the original header line of the fastq (the `-F` argument to fastq-dump); see https://edwards.sdsu.edu/research/fastq-dump/. The header can be important for FastQC and other downstream applications. SRA classifies data as Sample (SRS/ERS) -> Experiment (SRX/ERX) -> Data (SRR/ERR). SRR* files are the resulting data files of a particular experiment (SRX) run on a particular sample (SRS). Running the same experiment on the same sample multiple times can result in multiple SRR files.
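For example (accession numbers are placeholders): dump each run with original headers, then feed all runs to a mapper that accepts a comma-separated list, such as HISAT2:

```bash
for run in SRR1000001 SRR1000002 SRR1000003 SRR1000004 SRR1000005; do
    fastq-dump -F --gzip "$run"    # -F keeps the original read names
done

# HISAT2 takes a comma-separated list of single-end fastq files
hisat2 -x genome_index \
    -U SRR1000001.fastq.gz,SRR1000002.fastq.gz,SRR1000003.fastq.gz,SRR1000004.fastq.gz,SRR1000005.fastq.gz \
    -S sample.sam
```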
biostars
{"uid": 205682, "view_count": 4109, "vote_count": 2}
Hi, I want to phase my data in vcf format using impute2's "-phase" option, so I tried to convert the vcf file with plink:

    plink --vcf data.vcf --recode oxford --out converted_data

But the produced .map file does not conform to the required three-column format. Is there any other way to convert vcf to impute2 format?
Can you clarify what the problem is? `--recode oxford` should produce a .gen file and a .sample file; this command shouldn't generate a .map file at all.
biostars
{"uid": 161426, "view_count": 4487, "vote_count": 1}
I have RNAseq data on B. subtilis, and mapped it to the genome, so I have gene names and even BSU numbers for all of the genes. So far so good. Now I am struggling to find and assign functional annotations to each gene, preferably GO annotation. Despite B. subtilis being a model organism, it is not included in the GO consortium database; they only have E. coli. KEGG and BsubCyc both have B. subtilis functional annotations, but I cannot find a way to download a flat file from either site that associates each gene with its functional assignments. How do people do this? What set of bioinformatics skills am I totally unaware of but that seems to be taken for granted? Do people "webpage scrape" this info from KEGG? Is there a database format I need to learn?
The most straight-forward approach to obtaining GO annotation would in my opinion be to fetch it from UniProt. You can do that in several ways, for example, by querying for it via their REST API:

http://www.uniprot.org/uniprot/?query=taxonomy:224308&columns=id,go-id&format=tab
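If you want the result directly in R, the same query URL (as given above; note this is the legacy UniProt endpoint, so it may have moved since) can be read as a tab-delimited table:

    # read the REST response straight into a data frame
    url <- "http://www.uniprot.org/uniprot/?query=taxonomy:224308&columns=id,go-id&format=tab"
    go <- read.delim(url, stringsAsFactors = FALSE)

    head(go)  # one row per UniProt entry; GO IDs are semicolon-separated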
biostars
{"uid": 155842, "view_count": 2606, "vote_count": 2}
I made 3 binary files from MAP and PED files.
I'd like to know how to get individual genotypes for a specific SNP.
When I open the .bim file, I just see the SNP and allele information but no individual information (i.e. sample ID). How do I map the information? I'd like to get the information like below, and to be able to open it in Python or R. Do I need some other tools for that? Please let me know. For example,

    SNP      SAMPLE  GENOTYPE
    rs53576  1111    AA
    rs53576  1112    GA
    rs53576  1113    GG
    rs53576  1114    GA

Thank you in advance.
plink --bfile <bed file prefix> --extract <file containing SNP IDs, one per line> --recode --out <output prefix>

This should generate the desired output.
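To get from there to the SNP/SAMPLE/GENOTYPE table in the question, you can read the recoded files back into R. A rough sketch, assuming the output prefix above was `output` and a single SNP was extracted:

    # .ped: 6 pedigree columns (FID, IID, PAT, MAT, SEX, PHENO), then two
    # allele columns per SNP; .map: one row per SNP
    ped <- read.table("output.ped", stringsAsFactors = FALSE)
    map <- read.table("output.map", stringsAsFactors = FALSE,
                      col.names = c("chr", "snp", "cm", "pos"))

    data.frame(SNP      = map$snp[1],
               SAMPLE   = ped[[2]],                    # individual IDs
               GENOTYPE = paste0(ped[[7]], ped[[8]]))  # two alleles of SNP 1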
biostars
{"uid": 460548, "view_count": 1128, "vote_count": 1}
Hi everybody, can anyone tell me where to download the hs37d5.fa genome merged with the phiX genome? I have downloaded a few BAM files containing the tag @SQ SN:PhiX LN:6386 in the header section, together with the other hs37d5.fa contigs. I am looking for the version of the reference sequence that was used for these files. Any help would be appreciated! Regards (Note: I am aware that the phiX genome is perhaps used as a control in Illumina sequencing.)
If you want *exactly* the MPI-EVA version (which includes a circularised phiX and extended reference mtDNA) the construction is similar to the reference used by the 1000 Genomes Project (compare ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/technical/reference/), with minor changes. It's made as follows:

1 Download individual chrs from the Ensembl ftp (just like 1000g) ftp://ftp.ensembl.org/pub/current_fasta/homo_sapiens/dna/

2a Download the newer version of the mitochondrion (NC_012920, just like 1000g) http://www.ncbi.nlm.nih.gov/nuccore/251831106

2b Copy the first 1000bp of the mitochondrion onto its end. The resulting sequence is named "MT".

3 Download the concatenated decoy sequences from 1000 Genomes: ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/technical/reference/phase2_reference_assembly_sequence/hs37d5cs.fa.gz Also compare their READMEs: ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/technical/reference/phase2_reference_assembly_sequence/README_human_reference_20110707 ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/technical/reference/phase2_reference_assembly_sequence/hs37d5.slides.pdf

4 Download the Human herpes virus (NC_007605, aka EBV) from NCBI, just like 1000g. The sequence is then named "NC_007605". http://www.ncbi.nlm.nih.gov/nuccore/NC_007605

5a Download the phiX-174 reference (NC_001422). http://www.ncbi.nlm.nih.gov/nuccore/NC_001422

5b Copy the first 1000bp of phiX onto its end, name the result "phiX".

6 Create a reference (whole_genome.fa) with chrs 1-22, X, Y, the extended NC_012920 MT, the non-chromosomal supercontigs, the NC_007605 EBV, the decoy sequences (hs37d5), and the extended phiX. The order is chosen to match 1000 Genomes (plus phiX), see their fai file: ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/technical/reference/phase2_reference_assembly_sequence/hs37d5.fa.gz.fai

Note that two sequences (MT, phiX) are circular and have been extended to facilitate alignment. The correct incantation to wrap these alignments to their correct length is

    bam-rewrap MT:16569 phiX:5386

or

    bam-rmdup -z MT:16569 -z phiX:5386
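For steps 2b and 5b (appending the first 1000 bp of a circular sequence onto its own end), one way to do it is with Bioconductor's Biostrings. A sketch with placeholder file names:

    library(Biostrings)

    mt <- readDNAStringSet("NC_012920.fa")                  # the downloaded mitochondrion
    mt_ext <- xscat(mt, subseq(mt, start = 1, end = 1000))  # append the first 1000 bp
    names(mt_ext) <- "MT"

    writeXStringSet(mt_ext, "MT_extended.fa")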
biostars
{"uid": 340638, "view_count": 3373, "vote_count": 1}
Hi there, When I run the DESeq pipeline on the dds object I get this error message.

    > dds_res<-DESeq(dds_PvsN)
    estimating size factors
    Error in estimateSizeFactorsForMatrix(counts(object), locfunc = locfunc, :
      every gene contains at least one zero, cannot compute log geometric means
    In addition: Warning message:
    In class(object) <- "environment" :
      Setting class(x) to "environment" sets attribute to NULL; result will no longer be an S4 object

What strategy should be applied to resolve this? Thank you very much! Imran
I encountered this error only once in the past. It is as stated: every gene in your data has at least one zero value, and this creates an issue for the size-factor calculation. Solutions: 1. add a pseudo-count value of '1' to your data 2. use: `estimateSizeFactors(dds_PvsN, type = 'iterate')` Kevin
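Rough sketches of both fixes (`counts_matrix`, `coldata` and the `~ condition` design are placeholders; substitute your own objects):

    library(DESeq2)

    ## 1) add a pseudo-count of 1 before building the object
    dds1 <- DESeqDataSetFromMatrix(countData = counts_matrix + 1,
                                   colData   = coldata,
                                   design    = ~ condition)
    dds1 <- DESeq(dds1)

    ## 2) use the iterative size-factor estimator on the existing object;
    ##    DESeq() keeps pre-computed size factors
    dds2 <- estimateSizeFactors(dds_PvsN, type = "iterate")
    dds2 <- DESeq(dds2)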
biostars
{"uid": 440379, "view_count": 15833, "vote_count": 3}
I want to use gseGO() provided by the clusterProfiler R package. My gene list is the upregulated markers from a cluster generated by UMAP. The numbers are log2 fold changes.

```r
str(head(cluster7.genes))
 Named num [1:6] 2.14 1.81 1.76 1.71 1.67 ...
 - attr(*, "names")= chr [1:6] "10082" "9353" "53353" "55553" ...

cluster7.enriched <- gseGO(geneList = cluster7.genes,
              OrgDb = org.Hs.eg.db,
              keyType = "ENTREZID",
              ont = "BP",
              pvalueCutoff = 1,
              verbose = FALSE)

Warning messages:
1: In preparePathwaysAndStats(pathways, stats, minSize, maxSize, gseaParam, :
  There are duplicate gene names, fgsea may produce unexpected results.
2: In preparePathwaysAndStats(pathways, stats, minSize, maxSize, gseaParam, :
  All values in the stats vector are greater than zero and scoreType is "std", maybe you should switch to scoreType = "pos".
```

But I don't know why it returns these messages even when the `pvalueCutoff` is 1. Do I need to change `scoreType` to `"pos"` to solve this? If so, how can I do it, since I can't see any relevant parameter available in `gseGO()` for me to change? Thanks.
It has given you a warning message, not an error. You should take a look at the contents of cluster7.enriched to see if it has yielded any results. When doing a preranked GSEA like this, the inputs are generally a mixture of up and down regulated genes, so there will be positive and negative numbers. Some folks like to use the log fold changes as input. I like to use the DESeq test statistic. Also it looks as though you have duplicate gene names. This could be a problem and it is best to explicitly resolve this before GSEA. Some approaches used are to take the mean of duplicate genes, or just keep the entry with the most extreme value.
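For the duplicate-name warning, one common fix is to keep, for each duplicated ID, the entry with the most extreme value and then re-sort. A sketch using the vector from the question:

    ## order by absolute value so the most extreme entry of each ID comes first
    gl <- cluster7.genes[order(abs(cluster7.genes), decreasing = TRUE)]
    gl <- gl[!duplicated(names(gl))]   # drop the remaining duplicates
    gl <- sort(gl, decreasing = TRUE)  # gseGO() expects a decreasing ranked list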
biostars
{"uid": 9550488, "view_count": 613, "vote_count": 1}
Hi all. Does anyone know of a script or code that can convert a GVF file to VCF? There is a reference to a link https://code.google.com/p/gvf2vcf/ but it is not accessible anymore. Any help will be appreciated. Thanks
Found a perl script [here](https://github.com/hxin/DisEnt/blob/master/disnet/common/lib/ensembl-api/ensembl-variation/scripts/misc/gvf2vcf.pl) that should do it (scroll to bottom for usage instructions)
biostars
{"uid": 128714, "view_count": 5779, "vote_count": 2}
Hi, I want to download some publicly available differential gene expression datasets for my work. I am not sure where to start, and it would be great to get some tips on where to look. I checked GEO for datasets and found files in SOFT format, but I am looking for files in tab-delimited ASCII text format (similar to the ones used for GSEA).
Downloading the differential expression data (AKA the actual research finding of a paper) usually turns out to be surprisingly convoluted. It shows the sorry state of bioinformatics data distribution. The SOFT files are, in fact, ASCII text in tabular format, but you will need to do some parsing to make sense of them (drop the initial header lines at the very least). That being said, SOFT files represent measures and will not contain all the other information that is typically used in a differential expression study. Data tables with fold changes, p-values, corrected p-values etc. are usually only available in the supplementary information published with the paper, will usually be non-standardized, and interpreting them requires some manual intervention.
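If R is an option, the GEOquery package will do the SOFT/series-matrix parsing for you. A sketch with a placeholder accession (note this retrieves the expression measures, not the differential expression tables):

    library(GEOquery)

    gse   <- getGEO("GSE1234")   # returns a list of ExpressionSet objects
    expr  <- exprs(gse[[1]])     # genes x samples expression matrix
    pheno <- pData(gse[[1]])     # sample annotation

    write.table(expr, "GSE1234_matrix.txt", sep = "\t", quote = FALSE)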
biostars
{"uid": 344885, "view_count": 1046, "vote_count": 1}
Hi all, I am not understanding how to download a full matrix of a TCGA subset (e.g. Colon Adenocarcinoma). I selected Download Data > Data matrix > COAD Data matrix but I am getting a lot of files (level 3). I want only the matrix of genes x samples. Do I have to assemble it or can I download it somewhere? Thanks
Hi, you were searching in the right place. Indeed you will find plenty of txt files when you select "RNA-Seq" in the data matrix. There are gene expression files for each individual separately. In your case, I assume you want the files ending with "gene.quantification.txt", as they contain gene IDs and RPKM values. Another possibility would be https://genome-cancer.ucsc.edu/proj/site/hgHeatmap/. Select your tumor and analysis type of interest (button "Add datasets") and on the left side (mouse over) you will find a symbol which downloads the data. I hope that helps, Sebastian
biostars
{"uid": 102543, "view_count": 17614, "vote_count": 4}
I want to extract **gene name** , **gene start position** and **gene stop position** from the fasta header of the fasta file. I have tried to extract based on the position but those locations are not consistent. Is there any other way to extract them ? This is what I have tried so far. #I have a vector of these file names. Here I have just one element names1 =>"lcl|NC_005336.1_cds_NP_957781.1_1 [locus_tag=ORFVgORF001] [db_xref=GeneID:2947687] [protein=ORF001 hypothetical protein] [protein_id=NP_957781.1] [location=complement(3162..3611)] [gbkey=CDS]" #Then I extracted words from the string list string_list1 <- str_extract_all(names1, boundary("word")) #result string_list1[1] [[1]] [1] "lcl" "NC_005336.1_cds_NP_957781.1_1" [3] "locus_tag" "ORFVgORF001" [5] "db_xref" "GeneID" [7] "2947687" "protein" [9] "ORF001" "hypothetical" [11] "protein" "protein_id" [13] "NP_957781.1" "location" [15] "complement" "3162" [17] "3611" "gbkey" [19] "CDS" So, I was trying to extract 4th ,16th and 17th element from this list. It works for this particular example. This does not work for other headers where these positions are different. Usually, gene name is consistently present at the 4th position. But, the start and stop location differ among the fasta headers. So, this strategy is not working and I can't think of any other strategy.
Here is the start: # example data x <- c("lcl|NC_005336.1_cds_NP_957781.1_1 [locus_tag=ORFVgORF001] [db_xref=GeneID:2947687] [protein=ORF001 hypothetical protein] [protein_id=NP_957781.1] [location=complement(3162..3611)] [gbkey=CDS]", "lcl|NC_001111_NP_999_1 [locus_tag=Test001] [db_xref=GeneID:2947687] [protein=ORF001 hypothetical protein] [protein_id=NP_957781.1] [gbkey=CDS]") f1 <- function(x, pattern){ lapply(strsplit(x, " "), function(i){ grep(pattern, i, value = TRUE) }) } f1(x, "locus_tag") # [[1]] # [1] "[locus_tag=ORFVgORF001]" # # [[2]] # [1] "[locus_tag=Test001]" f1(x, "location") # [[1]] # [1] "[location=complement(3162..3611)]" # # [[2]] # character(0)
biostars
{"uid": 443876, "view_count": 778, "vote_count": 1}
Hi, I want to calculate statistics of missing data per each site in my vcf file. Using `vcftools --missing-site` gives wrong stats for several sites. Is there is any other way to calculate it? Thank you! **I have 36 samples and here is an example of the `vcftools --missing-site` output for four positions below** ``` chr1 10616 rs376342519 CCGCCGTTGCAAAGGCGCGCCG C 194.49 PASS AC=3;AF=0.083;AN=36;DB;DP=132;ExcessHet=0.1902;FS=0;InbreedingCoeff=0.4307;MLEAC=5;MLEAF=0.139;MQ=38.12;MQRankSum=-0.842;QD=24.31;SOR=1.329;VQSLOD=-6.944;culprit=MQ GT:AD:DP:GQ 0/0:2,0:2:6 ./.:0,0:0:. ./.:0,0:0:. ./.:0,0:0:. 0/0:5,0:5:15 0/0:2,0:2:3 ./.:0,0:0:. 1/1:0,3:3:10 ./.:0,0:0:. ./.:0,0:0:. 0/0:9,0:9:24 0/0:11,0:11:27 ./.:0,0:0:. ./.:0,0:0:. 0/0:9,0:9:15 0/0:19,0:19:54 0/0:4,0:4:12 0/0:9,0:9:2 ./.:0,0:0:. 0/0:9,0:9:9 0/0:8,0:8:18 0/0:8,0:8:24 0/0:6,0:6:15 ./.:0,0:0:. ./.:0,0:0:. 0/0:2,0:2:3 0/0:6,0:6:6 ./.:0,0:0:. ./.:0,0:0:. ./.:0,0:0:. 0/0:8,0:8:24 ./.:0,0:0:. 0/1:3,2:5:76 ./.:0,0:0:. ./.:0,0:0:. ./.:2,0:2:. chr2 120680534 rs74654922 A ATGTGTGTG 12887.7 PASS AC=7;AF=0.175;AN=40;DB;DP=751;ExcessHet=3.0103;FS=0;InbreedingCoeff=0.5714;MLEAC=12;MLEAF=0.3;MQ=58.52;NEGATIVE_TRAIN_SITE;QD=29.16;SOR=0.883;VQSLOD=-1.646;culprit=MQ GT:AD:DP:GQ 1|1:0,10:12:30 0|0:0,1:17:48 .|.:0,2:16:. 0|0:0,1:21:60 0/0:0,1:6:16 ./.:38,0:38:. 0|0:0,0:24:72 ./.:21,0:21:. 0/0:0,0:13:39 .|.:0,1:23:. 0/0:0,2:11:48 0|0:0,1:22:63 .|.:0,0:18:. ./.:20,0:20:. ./.:15,0:15:. 1/0:0,12:30:99 0|0:0,0:12:36 1|1:0,19:20:57 .|.:0,1:25:. ./.:22,0:22:. 0|0:0,0:19:57 0|0:0,0:13:39 .|.:0,1:16:. 0|0:0,0:26:78 ./.:26,0:26:. 0|0:0,0:29:87 0|0:0,1:29:84 1|1:0,19:21:57 .|.:0,0:23:. 0|0:0,2:23:63 ./.:14,0:14:. 0|0:0,0:20:60 ./.:11,0:11:. 0|0:0,0:15:45 ./.:32,0:32:. ./.:35,0:35:. chr22 50808269 . AG A 105.55 PASS AC=2;AF=0.059;AN=34;DP=1217;ExcessHet=0.1296;FS=0;InbreedingCoeff=0.1633;MLEAC=7;MLEAF=0.206;MQ=36.17;NEGATIVE_TRAIN_SITE;QD=30.35;SOR=2.584;VQSLOD=-3.491;culprit=MQ GT:AD:DP:GQ ./.:22,0:22:. 0/0:38,0:38:0 ./.:24,0:24:. ./.:20,0:20:. ./.:26,0:26:. ./.:43,0:43:. ./.:26,0:26:. ./.:19,0:19:. 0/0:30,0:30:0 ./.:26,0:26:. ./.:27,0:27:. 0/0:43,0:43:0 0/0:22,0:22:0 ./.:31,0:31:. ./.:28,0:28:. 0/0:34,0:34:0 ./.:26,0:26:. 1/1:0,2:7:7 0/0:51,0:51:0 0/0:33,0:33:0 ./.:21,0:21:. ./.:9,0:9:. 0/0:22,0:22:0 0/0:28,0:28:0 0/0:41,0:41:0 0/0:61,0:61:0 ./.:54,0:54:. 0/0:43,0:43:0 ./.:36,0:36:. 0/0:52,0:52:0 ./.:17,0:17:. 0/0:51,0:51:0 ./.:19,0:19:. 0/0:63,0:63:0 ./.:34,0:34:. 0/0:43,0:43:0 chr18 67740193 . C CTTTTTTTTTTTTTT 8423.06 PASS AC=9;AF=0.196;AN=46;DP=616;ExcessHet=0;FS=0;InbreedingCoeff=0.618;MLEAC=10;MLEAF=0.217;MQ=57.96;NEGATIVE_TRAIN_SITE;QD=26.16;SOR=0.956;VQSLOD=-2.121;culprit=MQ GT:AD:DP:GQ .|.:0,4:15:. .|.:0,5:15:. 1/1:0,2:5:42 .|.:0,2:14:. 1/1:0,5:6:0 0/0:29,0:29:0 0/0:15,0:15:6 0/0:0,1:6:14 0/0:13,0:13:0 .|.:0,4:12:. 0|0:0,6:26:60 .|.:0,3:11:. .|.:0,1:6:. 0/0:0,1:4:20 1|1:0,20:25:60 0/0:18,0:18:0 .|.:0,4:15:. .|.:0,4:18:. 0/0:29,0:29:12 0/0:0,5:15:90 0/0:0,2:6:18 0/0:0,1:6:38 .|.:0,4:15:. 0|0:0,1:17:46 0/0:0,3:9:44 .|.:0,6:12:. 0|0:0,1:18:51 0/0:21,0:21:6 .|.:0,8:16:. .|.:0,5:17:. .|.:0,4:14:. 1/1:0,1:1:0 0|0:0,2:27:75 1/0:0,1:5:38 0/0:24,0:24:15 0/0:16,0:16:9 ``` **And here is the output of `vcftools --missing-site`** ``` CHR POS N_DATA N_GENOTYPE_FILTERED N_MISS F_MISS chr1 10616 72 0 36 0.5 chr2 120680534 66 0 26 0.393939 chr22 50808269 72 0 38 0.527778 chr18 67740193 59 0 13 0.220339 ``` **When in reality it should be this** ``` chr1 10616 72 0 36 0.5 chr2 120680534 72 0 32 0.44 chr22 50808269 72 0 38 0.527778 chr18 67740193 72 0 26 0.361 ```
This python code would work for this example:

    import sys

    infile = sys.argv[1]
    try:
        outfile = sys.argv[2]
    except IndexError:
        outfile = "/dev/stdout"

    with open(infile) as fin, open(outfile, 'w') as fout:
        header = ["CHR", "POS", "N_DATA", "N_GENOTYPE_FILTERED", "N_MISS", "F_MISS"]
        fout.write("\t".join(header) + "\n")
        for line in fin:
            if line.startswith("#"):
                continue
            fields = line.strip().split()
            chrom, pos = fields[0], fields[1]
            missing = 0
            for smp in fields[9:]:
                # count '.' alleles in the GT field, treating '|' and '/' alike
                missing += smp.split(":")[0].replace("|", "/").split("/").count(".")
            total = 2 * len(fields[9:])  # two alleles per sample
            fout.write("\t".join([chrom, pos, str(total), "0", str(missing),
                                  str(round(missing / total, 6))]) + "\n")

Assuming it's in a file called, e.g., `missing_sites.py`, it can be invoked by:

    python missing_sites.py input.vcf

or

    python missing_sites.py input.vcf outfile
biostars
{"uid": 9521326, "view_count": 804, "vote_count": 1}
Dear all, greetings, I'd like to ask you for a piece of advice please: we have 3 scRNA-seq samples that were sequenced at different depths (200, 800, and 900 million reads), and consequently we do see: -- distinct numbers of cells, and -- (on average) distinct numbers of genes/cell, depending on the sample. Would the integration of these samples with CELLRANGER AGGR be a good approach (it does normalize the samples too), followed by standard analysis of the AGGREGATED SAMPLES with SEURAT or the SimpleSingleCell pipeline? Thank you very much, -- bogdan
I would:

**Option 1:**

Independently quantify genes of each sample --> Normalize to 10,000 reads per cell (default in most scRNA analyses) / **SCTransform** --> Transform the matrix to square root (instead of log2(counts+1)) --> Merge the three matrices (`cbind`) --> Remove genes that are expressed in less than 1, 2 or 5% of the cells --> Use ComBat to remove batch effects (here three batches) --> Import the matrix to Seurat --> Skip normalisation --> PCA/UMAP/clustering etc... I am pretty sure the cells will be clustered by cell type rather than by sample.

If you want to see gene expression changes across clusters, I would introduce an extra step of imputation. So it would be:

Independently quantify genes of each sample --> Normalize to 10,000 reads per cell (default in most scRNA analyses) --> Transform the matrix to square root / **SCTransform** --> Merge the three matrices --> Remove genes that are expressed in less than 1, 2 or 5% of the cells --> Use ComBat to remove batch effects (here three batches) --> `Impute gene expression` (for example [MAGIC][1]) --> Import the matrix to Seurat --> Skip normalisation --> PCA/UMAP/clustering etc... Take the average gene expression for each cluster, calculate a cluster specificity score ([Tau score for example][2]), then take genes with a Tau score of more than 0.5 or 0.3 and perform k-means clustering of the averaged gene expression across clusters to pick markers.

**Option 2:**

Use Seurat (v3) CCA analysis to [integrate datasets][3]. Straightforward. It performs **SCTransform** instead of library size normalisation, which seems to be better for scRNA data, but it depends on the end goal.

**Option 3:**

Or if you want to use Seurat's default differential analysis, start with raw counts but use the knn graph from the above analysis and proceed with typical marker analysis or differential gene expression analysis. It's all custom analysis but works pretty well and it's fun.

[1]: https://www.cell.com/cell/fulltext/S0092-8674(18)30724-4?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0092867418307244%3Fshowall%3Dtrue
[2]: https://academic.oup.com/bib/article/18/2/205/2562739
[3]: https://satijalab.org/seurat/v3.1/integration.html
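A minimal sketch of Option 2 with Seurat v3's SCTransform-based integration, where `s1`, `s2`, `s3` stand in for your three per-sample Seurat objects:

    library(Seurat)

    obj.list <- lapply(list(s1, s2, s3), SCTransform)   # per-sample normalisation
    features <- SelectIntegrationFeatures(object.list = obj.list)
    obj.list <- PrepSCTIntegration(object.list = obj.list,
                                   anchor.features = features)

    anchors  <- FindIntegrationAnchors(object.list = obj.list,
                                       normalization.method = "SCT",
                                       anchor.features = features)
    combined <- IntegrateData(anchorset = anchors,
                              normalization.method = "SCT")

    combined <- RunPCA(combined)
    combined <- RunUMAP(combined, dims = 1:30)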
biostars
{"uid": 402941, "view_count": 4898, "vote_count": 2}
Hi, I would like to know how I could download all the introns (in FASTA) of a species from Ensembl via the web.
Hi, As far as I know, we don't store the intron sequences explicitly anywhere. Here's a piece of Perl code that uses the Ensembl Perl API to fetch all intron sequences for the transcripts overlapping a particular region on the first human chromosome. It can easily be modified to fetch all transcripts in the species and to dump the sequence to a file instead of to the screen:

    use strict;
    use warnings;

    use Bio::EnsEMBL::Registry;
    use Bio::EnsEMBL::Utils::SeqDumper;

    my $registry = 'Bio::EnsEMBL::Registry';

    $registry->load_registry_from_db( '-host' => 'ensembldb.ensembl.org',
                                      '-port' => '5306',
                                      '-user' => 'anonymous',
                                      '-db_version' => '63' );

    my $sa = $registry->get_adaptor( 'Human', 'Core', 'Slice' );

    my $slice = $sa->fetch_by_region( 'Chromosome', '1', 12_000, 13_000 );

    my $dumper = Bio::EnsEMBL::Utils::SeqDumper->new();

    foreach my $transcript ( @{ $slice->get_all_Transcripts() } ) {
      foreach my $intron ( @{ $transcript->get_all_Introns() } ) {
        $dumper->dump( $intron->feature_Slice(), 'FASTA' );
      }
    }

I hope this helps.
biostars
{"uid": 10988, "view_count": 7929, "vote_count": 8}
Hi all, I'm analysing some microarray data and just calculated p-values using a t-test between 2 conditions (each with 3 replicates; total number of genes 14000). My p-values and adjusted p-values are coming out exactly the same. I've tried both FDR and bonferroni for correction using the p.adjust function in R and the results are the same. Is it possible to have the exact same p-values and adjusted p-values. I'm getting a bit doubtful. Any thoughts. Thanks!!!
It's impossible to have the same p-value and adjusted p-value after bonferroni correction with 14000 genes. There must be an error with the code at some point, please post it. Also, you want to use limma rather than directly doing a T-test.
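One frequent cause worth ruling out: applying `p.adjust` to one p-value at a time (e.g. row by row inside a loop or `apply`), in which case `n = 1` and every adjusted value equals the raw one. A quick illustration:

    pvals <- c(0.0001, 0.01, 0.02, 0.5)

    p.adjust(pvals, method = "bonferroni")           # correct: adjusts over all 4 tests
    sapply(pvals, p.adjust, method = "bonferroni")   # wrong: n = 1, returns pvals unchanged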
biostars
{"uid": 224961, "view_count": 4232, "vote_count": 1}
Hi, I have two lists of genes

    mycounts <- read.csv("geneID and length.csv", header = T, sep = "\t", stringsAsFactors = FALSE)

```
> colnames(mycounts)
[1] "genesID.geneslength"
> head(mycounts[1:4,])
[1] "R0010W,1272" "R0020C,1122" "R0030W,546" "R0040C,891"
> dim(mycounts)
[1] 7130    1
```

    mycounts1 <- read.table("read.txt", header = T, sep = "\t", stringsAsFactors = FALSE)

```
> dim(mycounts1)
[1] 5961    1
> colnames(mycounts1)
[1] "Freq"
```

How can I keep only the genes from my read file in my genes file? I mean, the genes file has 7130 genes but I only need 5961 of them. Could you help me please? Thank you
See `%in%` operator, e.g.: data1_subset <- data1[ data1$genes %in% data2$genes, ] Or we can use `merge`, e.g.: data1_merge <- merge(data1, data2, by = "gene")
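Note that for the data shown in the question there is no `gene` column yet: the IDs first have to be split out of the combined "ID,length" column. A sketch, assuming the gene IDs of the read file are its rownames (adjust to wherever they actually live):

    parts <- do.call(rbind, strsplit(mycounts[[1]], ","))
    genes <- data.frame(gene   = parts[, 1],
                        length = as.numeric(parts[, 2]))

    genes_subset <- genes[genes$gene %in% rownames(mycounts1), ]
    dim(genes_subset)  # should now match the 5961 genes of the read file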
biostars
{"uid": 171078, "view_count": 2678, "vote_count": 1}
I have a long list of species names. I was wondering whether a tool was available online, which could fetch the NCBI IDs?
try [taxonkit](https://github.com/shenwei356/taxonkit), providing single binary files for Windows/Linux. It's really fast. $ cat names.txt Homo sapiens Akkermansia muciniphila ATCC BAA-835 Akkermansia muciniphila Mouse Intracisternal A-particle Wei Shen uncultured murine large bowel bacterium BAC 54B Croceibacter phage P2559Y $ time taxonkit name2taxid names.txt [INFO] parsing names file: /home/shenwei/.taxonkit/names.dmp [INFO] 1587755 names parsed Homo sapiens 9606 Akkermansia muciniphila ATCC BAA-835 349741 Akkermansia muciniphila 239935 Mouse Intracisternal A-particle 11932 Wei Shen uncultured murine large bowel bacterium BAC 54B 314101 Croceibacter phage P2559Y 1327037 real 0m7.217s user 0m10.682s sys 0m0.471s $ time taxonkit name2taxid names.txt | taxonkit lineage -i 2 Homo sapiens 9606 cellular organisms;Eukaryota;Opisthokonta;Metazoa;Eumetazoa;Bilateria;Deuterostomia;Chordata;Craniata;Vertebrata;Gnathostomata;Teleostomi;Euteleostomi;Sarcopterygii;Dipnotetrapodomorpha;Tetrapoda;Amniota;Mammalia;Theria;Eutheria;Boreoeutheria;Euarchontoglires;Primates;Haplorrhini;Simiiformes;Catarrhini;Hominoidea;Hominidae;Homininae;Homo;Homo sapiens Akkermansia muciniphila ATCC BAA-835 349741 cellular organisms;Bacteria;PVC group;Verrucomicrobia;Verrucomicrobiae;Verrucomicrobiales;Akkermansiaceae;Akkermansia;Akkermansia muciniphila;Akkermansia muciniphila ATCC BAA-835 ... real 0m13.151s user 0m23.251s sys 0m0.948s
biostars
{"uid": 255055, "view_count": 2876, "vote_count": 1}
Does anyone have a simple solution to downloading all the refseq genomes for a particular taxon? Using `ncbi-genome-download` its possible to specify the species or genus TaxIDs and download them, but apparently you can't go higher up the taxonomic ranks (even though enterobacteria has a TaxID of 543 for instance). If anyone knows of a way to download all the Enterobacteria I'm all ears. Alternatively, if there is a method of extracting the species TaxIDs from the Enterobacterial taxid in NCBI such that I can pass them all directly to `ncbi-genome-download` that would work too.
Maybe this would do the trick! esearch -db genome -query "txid543 [Organism]"|elink -target nuccore|efilter -query "RefSeq"|efetch -format fasta
biostars
{"uid": 302533, "view_count": 2271, "vote_count": 1}
Hello all, I wanted to get a poll of what distros are currently popular with researchers working in bioinformatics and related biological data mining fields. Mainly:

- What Linux distribution do you currently use (or have you previously used)?
- What software and language (packs) do you use on a daily basis?
- Where does this distro outperform the rest? Where does it fall short?

I'll start; I currently use Ubuntu and have for the past two years. I mainly use vim with BioPerl scripts. Ubuntu, which is Debian based, makes installing foreign libraries almost painless. It falls short when it comes to software updates.
[BioLinux](http://nebc.nerc.ac.uk/tools/bio-linux/) works very well. It is equipped with a lot of bioinformatics-related software, and is based on an Ubuntu system. You can also use an Ubuntu system directly, and use BioLinux as a repository for bioinformatics software.
biostars
{"uid": 16778, "view_count": 29581, "vote_count": 15}
Is there a Picard tool to filter one or more reads by the following:

- FLAG (second column)
- Mapping Quality (MAPQ, 5th column)
- TLEN (9th column)

??
You can use [BamTools](https://github.com/pezmaster31/bamtools) with its **filter** utility. You need to write a .json file like this one:

    {
      "isMapped" : "true",
      "mapQuality" : ">50",
      "isPaired" : "true"
    }

And run the command

    bamtools filter -in in.bam -out out.bam -script filter.json

See a tutorial [here](https://github.com/pezmaster31/bamtools/wiki/Tutorial_Toolkit_BamTools-1.0.pdf). I don't see a way to extract TLEN with it, though.
biostars
{"uid": 47630, "view_count": 8655, "vote_count": 4}
Sorry if this might be a trivial question! I read a lot about this until I got lost. I need to download a WGS VCF file from the 1000 Genomes ftp site. I need the SNPs (SNVs and indels) and, most importantly, I need to have the individual genotypes of all the persons involved. So for example, this file: [ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/release/20130502/ALL.wgs.phase3_shapeit2_mvncall_integrated_v5b.20130502.sites.vcf.gz][1], which was referenced many times on Biostars, does not contain individual genotypes. I need something similar to what [those files][2] contain. Is there one global file containing SNPs/indels for WGS data including the genotypes of the various samples? Thanks!

[1]: ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/release/20130502/ALL.wgs.phase3_shapeit2_mvncall_integrated_v5b.20130502.sites.vcf.gz
[2]: ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/phase1/analysis_results/integrated_call_sets/
You can download the entire data per chromosome (chr1-22 & chrX) —including individual genotypes for both indels and SNPs— using this code: prefix="ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/release/20130502/ALL.chr" ; suffix=".phase3_shapeit2_mvncall_integrated_v5a.20130502.genotypes.vcf.gz" ; for chr in {1..22} X; do wget $prefix$chr$suffix $prefix$chr$suffix.tbi ; done From: https://www.biostars.org/p/271694/ Kevin
biostars
{"uid": 333212, "view_count": 2160, "vote_count": 1}
I am an IT guy developing a database for genotype data of samples, and I am confused: when I look up a SNP in NCBI, the genes listed for that SNP sometimes have multiple consequences (intron, missense, upstream, etc.), but in the data that I received only one of those consequences is listed.
Hello, you have to get familiar with the concept of [transcripts][1] to understand this. In short: The sequence of the **gene** can encode for multiple (slightly) different **protein** sequences. fin swimmer

[1]: https://www.biostars.org/p/244850/
biostars
{"uid": 341074, "view_count": 1251, "vote_count": 1}
Hi guys, First I just want to say, I know this has been asked numerous times and in a number of places. However my confusion has increased progressively! If someone could set the record straight on how to generate TMM-normalized counts using edgeR, I would be incredibly grateful! This post has two answers that seem to disagree with one another... https://www.biostars.org/p/317701/ The main reason I want to generate a TMM matrix from my count matrix is to compare gene expression levels within samples (see the level of multiple cell-type markers within a single sample) in a bar plot, and then generate the same bar plot for every other sample to compare between samples. I think using TMM-normalized counts will allow me to compare between samples and within samples according to here: https://hbctraining.github.io/Training-modules/planning_successful_rnaseq/lessons/sample_level_QC.html I plan to use something like this to generate the bar plots: https://www.biostars.org/p/340485/#341008 Any help would be appreciated. Thank you in advance :)
The default normalization in edgeR can be broken down into two steps:

----------

1) normalization by library size. That is simply the correction for read depth. While this may be good enough when there are no widespread changes in library composition (= samples are very similar and only very few genes are differential), this often is not good enough. See my answer here for an example (https://www.biostars.org/p/9465851/#9465854) using GTEx data where I compare pancreas and lung transcriptomes, so one would expect **notably** different gene expression profiles. As you'll see, plain per-million scaling results in biased normalized counts while TMM manages to properly center the bulk of genes at y=0 in the MA-plot.

----------

2) the introduction of normalization factors that correct the library size-scaled values for the compositional component. This is what the Trimmed Mean of M-values (TMM) does. For technical details see the original paper by [Robinson & Oshlack in Genome Biology from 2010][1].

----------

Points 1) and 2) are then combined to calculate the **effective library size**, which is then used to divide the raw counts by to obtain normalized counts, also often referred to as TMM-normalized counts or `cpm`. In practice:

    #/ make the DGEList:
    y <- DGEList(...)

    #/ calculate TMM normalization factors:
    y <- calcNormFactors(y)

    #/ get the normalized counts:
    cpms <- cpm(y, log=FALSE)

The `cpm` function uses the normalization factors (given that `calcNormFactors` was run on that DGEList) internally. If not, then `cpm` just returns the plain per-million scaled values.

[1]: https://genomebiology.biomedcentral.com/articles/10.1186/gb-2010-11-3-r25
biostars
{"uid": 9475236, "view_count": 4748, "vote_count": 2}
I am trying to use PAML for the first time, and it's thrown up a few errors I am struggling to fix. The manual is particularly tricky to read, but I've solved most of the other problems I've come across. I am aiming to use it to calculate dN/dS for sets of genes. I used RAxML to generate my trees in Newick format, but the error message I get after executing codeml is this: > Error: need branch labels in the tree for the model.. My tree looks like this: (chrI_BS45_CA_R_ENSGACG00000022906:2,chrI_BS49_CA_R_ENSGACG00000022906:1,chrI_BS43_CA_R_ENSGACG00000022906:1,chrI_BS47_CA_R_ENSGACG00000022906:1,chrI_BS51_CA_R_ENSGACG00000022906:1,chrI_BS53_CA_R_ENSGACG00000022906:1):1.0; And my control file looks like this: seqfile = /data/12_PAML/Input/Chr1/CA_R_ENSGACG00000022906_pruned.fa outfile = /data/12_PAML/Output/Chr1/CA_R_ENSGACG00000022906_Codeml_Output.txt treefile = /data/12_PAML/Input/Chr1/CA_R_ENSGACG00000022906_pruned.tre noisy = 9 verbose = 1 runmode = 0 seqtype = 1 CodonFreq = 0 model = 2 NSsites = 2 icode = 0 fix_kappa = 0 kappa = 1 fix_omega = 0 omega = 1 cleandata = 1 I've tried looking for similar answers and Ziheng has posted about similar problems saying to look at the examples folder. In it you get a trees like this: ((1,2) #1, ((3,4), 5), (6,7) ); or ((_10_H._fulgens, (((__4_H._kamtschatkana, (__1_H._rufescens, (__2_H._sorenseni, __3_H._walallensis))) #1, (__5_H._sieboldii, (__6_H._discus_hannai, __7_H._gigantea))), (__8_H._corrugata, __9_H._cracherodii))), _25_H._iris, ((_17_H._pustulata, (_24_H._t.coccinea, _23_H._t.tuberculata)), (_22_H._australis, ((_18_H._midae, ((_11_H._roei, (_12_H._scalaris, _13_H._laevigata)), (_14_H._cyclobates, (_15_H._rubra, _16_H._conicopora)))), ((_19_H._ovina, _21_H._varia), _20_H._diversicolor))))); I'm struggling to see the difference. I've tried removing the floating point values and running it again, but to no avail. Any help would be very much appreciated. And thank you for your time.
Turns out this was a very simple answer. For using any of the PAML models with selection, you need to designate the branch(es) on your gene tree or phylogeny where selection differs. The notation is #1, up to #*n* if there are *n* branches with different strengths of selection.
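Applied to the tree from the question, marking (for example) the first taxon's branch as the foreground branch would look like the line below; which branch actually gets the #1 label depends on your hypothesis, and PAML accepts the label after the branch length:

    (chrI_BS45_CA_R_ENSGACG00000022906:2 #1,chrI_BS49_CA_R_ENSGACG00000022906:1,chrI_BS43_CA_R_ENSGACG00000022906:1,chrI_BS47_CA_R_ENSGACG00000022906:1,chrI_BS51_CA_R_ENSGACG00000022906:1,chrI_BS53_CA_R_ENSGACG00000022906:1):1.0;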
biostars
{"uid": 357887, "view_count": 2340, "vote_count": 1}
Hi, I have mapped (paired-end) RNA-seq reads (so BAM file information) and I want to extract the nucleotide in the genome immediately prior to the (forward) mapping (essentially to test whether an RNase digest has worked correctly). As it is in BAM format, extracting the position of the read (chromosome and location) is straightforward (just samtools view it, and process the line output for the relevant field), but I was wondering if there was a quicker way than brute force (i.e. extracting, sorting, then reading from the genome FASTA file).
Using my tool [**samslop**](https://github.com/lindenb/jvarkit/wiki/SamSlop) and [**bioalcidae**](https://github.com/lindenb/jvarkit/wiki/BioAlcidae): $ java -jar dist/samslop.jar -c -m 1 -M 1 -r ref.fa S1.bam |\ java -jar dist/bioalcidae.jar -F SAM -e 'while(iter.hasNext()) {var read=iter.next(); if(read.getReadUnmappedFlag()) continue; out.println(read.getReferenceName()+":"+read.getAlignmentStart()+"-"+read.getAlignmentEnd()+" "+read.getReadString().substr(0,1)+"/"+read.getReadString().substr(read.getReadLength()-1));}' | uniq | head rotavirus:1-71 G/A rotavirus:1-72 G/A rotavirus:1-53 G/A rotavirus:1-72 G/A rotavirus:1-65 G/A rotavirus:1-68 G/A rotavirus:2-73 G/T rotavirus:3-74 C/A rotavirus:3-69 C/T rotavirus:3-74 C/A
biostars
{"uid": 173201, "view_count": 2114, "vote_count": 1}
I calculated the pangenome for a set of bacterial genomes I am working with. I have a classical pangenome matrix that looks like this:

          speciesA speciesB ... speciesZ
    gene1  1        2            0
    gene2  0        1            0
    gene3  1        1            1
    gene4  1        1            2

Now I would like to map the gene gain and loss onto a phylogenetic tree. Something like this:

![enter image description here][1]

I guess I could just record all families of genes shared among the different species in each branch of the tree, but I believe there is a more elegant way of doing that. Any suggestion?

[1]: http://www.nature.com/nature/journal/v510/n7503/images/nature13400-f3.jpg
I guess you are looking for software like Count, [which performs ancestral reconstruction of gene family sizes over a given tree][1]. But as [abascalfederico][2] mentioned, you should be very careful with the possibility of HGT.

[1]: http://bioinformatics.oxfordjournals.org/content/26/15/1910.full "doi: 10.1093/bioinformatics/btq315"
[2]: https://www.biostars.org/p/180593/#180615
biostars
{"uid": 180593, "view_count": 5668, "vote_count": 3}
So here is the result from cuffdiff; I only printed out certain columns for readability:

```
test_id      gene  value_1  value_2  significant
XLOC_226942  -     3.73672  0        yes
XLOC_227336  -     6.80194  0        yes
XLOC_227426  -     0        111172   no
XLOC_227811  -     0        154.721  no
XLOC_228955  -     0        9.80122  yes
XLOC_229214  -     6.61332  0        yes
XLOC_229754  -     8000.99  0        no
XLOC_231416  -     0        714.61   no
XLOC_231570  -     11.8838  0        yes
XLOC_231696  -     109.917  0        no
XLOC_231746  -     0        946.635  no
XLOC_231910  -     2.8676   0        yes
XLOC_231945  -     7.25977  0        yes
XLOC_232519  -     2.42438  0        yes
XLOC_232650  -     126.899  0        no
XLOC_232742  -     25.7937  0        yes
XLOC_233457  -     163.295  0        no
```

So why does it show no significant difference when one of the values is 0 and the other is over 100? Isn't that a huge difference? And when one is 0 and the other is less than 100, it is considered significant. Do you guys have an explanation for this? Should I also consider those as significantly expressed genes? Thanks so much!
This is a known issue with Cuffdiff, and one reason people generally prefer other programs (DESeq and edgeR) instead.
biostars
{"uid": 150114, "view_count": 2550, "vote_count": 2}
hello, I am trying to run gatk cnv caller tutorial and I am facing an error with respect to reference fasta file. I have installed gatk using conda. The code I am trying to run is: gatk PreprocessIntervals \ -R ~/tutorial/GRCh38/resources_broad_hg38_v0_Homo_sapiens_assembly38.fasta --padding 0 \ -L ~/tutorial/tutorial_11684/gcnv-chr20XY-contig.list \ -imr OVERLAPPING_ONLY \ -O ~/output/chr20XY.interval_list and the error i am getting is: > A USER ERROR has occurred: The specified fasta file (file:///home/username/tutorial/GRCh38/resources_broad_hg38_v0_Homo_sapiens_assembly38.fasta) does not exist. Please give some suggestions if there is any error in the code.
Have you checked to see if the file is readable? You can use ls -lh If it isn't readable, you can make it so using chmod. For example, you could use chmod a+r <your_file>
biostars
{"uid": 9520889, "view_count": 844, "vote_count": 1}
Dear Biostars folk, I have been trying to come up with an efficient way to return all the NON-hits from a blastn query. Basically, I need to blast all reads in an Illumina library against one another, and only find the reads in set A that have no hit in B. Although I have a solution, it seems to be really slow, and I wonder if you could help me out. Here's what I did so far:

    makeblastdb -in ${B} -dbtype nucl
    blastn -query ${A} -db ${B} > blast_results.dat
    cat blast_results.dat \
    | grep 'No hits' -B 6 \
    | grep 'Query' \
    | cut -d '=' -f2 \
    | cut -d ' ' -f2 \
    > unique_read_ids.dat
    cat unique_read_ids.dat \
    | while read line; do sed -n "/${line}/,/>/p" $query \
    | sed '$d'; done \
    > result.fn

Now... this totally works, but it takes about a week to finish, although the blastn part only takes 1-2 days. So that last bit, where I fish out all the non-hits, is probably not so efficient. Is there any way to let blastn return NON-hits, instead of all queries / all hits? If so, how does that work? I've tried setting something up with the e-values, but it always seems to be a minimal value... Thanks for your time!
For anyone returning to this in the future, there's a biopython tutorial on doing exactly this, in a much more efficient way: https://biopython.org/wiki/Retrieve_nonmatching_blast_queries
biostars
{"uid": 457325, "view_count": 1053, "vote_count": 1}
Hi, I have two BAM files from ICGC. During mutation calling, vardict/mutect/freebayes/varscan split the BAM by its chromosomes. Those GL00 parts also appear, just like chr1-2-3-4-5-6-7 appear. I checked what they are and found they are contigs?!? (forgive my ignorance) How should I treat them? Will they cause a problem during variant calling output interpretation? I read one previous thread about this but there was no information about how to treat them. I need your help. Best, Tunc.
You can check http://genome.ucsc.edu/cgi-bin/hgGateway where they provide details for every assembly. This was addressed on the UCSC genome support forum: http://redmine.soe.ucsc.edu/forum/index.php?t=msg&goto=8701&S=5061c99adf3f4c4ee90f0ea362d838c7 > chr*_random - are called "unlocalized" sequences. The chromosome is > known, but not the location on the chromosome. chrUn_* - are called > "unplaced" sequences. They probably belong to the sequenced genome, > but placement is unknown at this time. > The GL numbers (or other types of numbers) in these names are the > genbank identification numbers which can be used in a nucleotide > search at Entrez. For example: chr1_GL456211_random - unlocalized > sequence belonging to chr1, NCBI identification: GL456211 > chrUn_JH584304 - unplaced sequence, NCBI identification JH584304
biostars
{"uid": 188628, "view_count": 3828, "vote_count": 3}
Dear Biostar members, I am involved in analyzing data from NGS technologies. As you may know, raw data files are in fastq format. And the fact is that these files are huge (several gigabytes each). As a first step, we naturally want to make some quality controls of the sequencing step. My team and I are aware of some tools like [Picard](http://picard.sourceforge.net/) or [bamtools](https://github.com/pezmaster31/bamtools/) that give some information and statistics, but they do not allow us to extract all the information we want. So, at some point, I am afraid we will have to parse fastq files ourselves. Which programming language would you choose to do the job (i.e.: read and process a 10GB file)? Obviously, the main goal is to make the processing as fast as possible.

1. awk/sed
2. C
3. Perl
4. Java
5. Python

Do you think that we will be I/O bound anyway, so using one language or another makes no big difference? Would you read the file line by line or load chunks of hundreds of lines (for instance)? Many thanks. T.
The short answer is: it depends a lot on what exactly it is you need to do with the contents of the files.

Regarding being I/O bound, I think it is quite unlikely unless you have the data on very slow disks. Even a single 5400rpm disk can read in the order of 50MB/s, which means that reading a 10GB file will not take much over 3min. Since you are working with large NGS datasets, I would assume that you have a RAID system, in which case the combined speed of multiple disks will almost certainly exceed how fast you can process the data.

When it comes to speed, nothing beats a good C or C++ implementation. However, there is a huge price to pay in terms of development time. If what you do is pretty basic string operations, both Perl and Python should do fine, and the savings in development time are most likely well worth the slightly longer run time compared to a C or C++ implementation.

In my experience, Perl has a more efficient implementation of regular expressions. Conversely, if you need to load all the data into memory, the Python data structures appear to be more memory efficient.

Awk is the only one of the languages that you list that I would advise you against. It is very convenient for writing short command-line hacks, but speed is not one of its virtues. I don't have any experience with using Java for bioinformatics tasks.
biostars
{"uid": 5005, "view_count": 24805, "vote_count": 10}
I have a directory with 50 bed files. I want to get all regions in common in at least 50% of my files. How would I get that? Using `bedops` I can intersect all, but I want at least those regions existing in 25 files.
This isn't counting the fraction of overlap between intervals, so you can't directly use available options in *bedops* or *bedmap*, but you can use some ID tricks to uniquely associate an interval with the BED file it comes from, and use that ID information to do counting and thresholding. Here is one way to do so:

1. Label your input BED files so that their IDs uniquely identify their intervals, *e.g.*, for BED files called `1.bed`, `2.bed`, *etc.*:

```
$ for idx in `seq 1 50`; do cut -f1-3 $idx.bed | awk -vidx=$idx '{ print $0"\t"idx; }' > $idx.id.bed; done
```

You would modify this loop depending on how you name your files. Basically, however you do this, the goal is to print the interval in the first three columns, and use the number of files you have as a unique identifier in the fourth column. You would print `1` in the fourth column of the first file `1.id.bed`, and `5` in the fourth column of the fifth file `5.id.bed`, and so on. I use numbers here, but you can use any unique identifier you want, instead. Cell type. Experiment or sample name, etc. The key part to making this work is that the chosen identifier is unique to that input file and that file only.

2. Take the union of all these ID-tagged files with BEDOPS *bedops --everything* and pipe the result to BEDOPS *bedmap* with the *--echo-map-id-uniq* operation:

```
$ bedops --everything *.id.bed | bedmap --echo --echo-map-id-uniq --delim '\t' - > all.bed
```

What you have at this point, for every interval from every input BED file, is a file called `all.bed`, which shows that interval in the first three columns, the file it comes from in the fourth column, and a fifth column that is a semi-colon delimited string of every unique ID value that overlaps that interval, *e.g.*:

```
...
chrN 123 456 5 2;5;6;7;11;12;...;46;48
chrN 234 567 5 5;6;7;8
...
```

Because we use *--echo-map-id-uniq*, this string contains unique identifiers - no duplicates. This uniqueness property is how we can count how many of the original input files overlap.

3. To get an interval that has regions in common with at least 50% of your 50 input files - 25 files other than the original file the interval comes from - you can use *awk* to `split` this ID string in the fifth column by the semi-colon delimiter, counting how many unique IDs you find and printing the interval, if the threshold of 26 ID values is met (25 IDs other than the ID representing the original interval's source):

```
$ awk -vthreshold=26 '{ \
    n_ids = split($5, ids, ";"); \
    if (n_ids >= threshold) { \
        print $0; \
    } \
}' all.bed > thresholded.bed
```

A shorter, cleaner *awk* statement with the same functionality would be:

```
$ awk -vthreshold=26 '(split($5, ids, ";") >= threshold)' all.bed > thresholded.bed
```

**Bonus**: If you need to get back any additional columns of the original interval, you can just do another *bedmap* operation on `thresholded.bed` on a union of the 50 input BED files.

```
$ bedops --everything 1.bed 2.bed ... 50.bed > union.bed
$ bedmap --echo --exact union.bed thresholded.bed > union-thresholded.bed
```

**Note**: Input files should be sorted before use with BEDOPS tools, if the sort state is unknown or different. BEDOPS includes a tool called [*sort-bed*](http://bedops.readthedocs.org/en/latest/content/reference/file-management/sorting/sort-bed.html) that sorts BED files faster than GNU *sort* or alternatives.
    $ sort-bed < foo.unsorted.bed > foo.sorted.bed

You can use whatever you like here, so long as the BEDOPS sort criteria are met before passing data to *bedops* and *bedmap*.
biostars
{"uid": 172566, "view_count": 6304, "vote_count": 3}
Hi everyone, I have just started learning genomics as part of my bioinformatics degree, and I've been introduced to Linux for handling fastq files and using FastQC. Can anybody suggest some good learning resources that are more inclined towards Linux for genomics?
[The Linux Documentation Project][1] has many resources to read [1]: https://tldp.org/index.html
biostars
{"uid": 9485624, "view_count": 1269, "vote_count": 7}
I have a fasta file named `119XCA.fasta`, as shown below:

    >cellulase
    ATGCTA
    >gyrase
    TGATGCT
    >16s
    TAGTATG

I need to remove all the fasta headers, keep the sequences one after another, and write the file name as the fasta header. The expected outcome is shown below:

    >119XCA
    ATGCTA
    TGATGCT
    TAGTATG

I have used the following script, `sed '/^>/d' foo.fa > out.fa`, which removes the fasta headers, but I do not know how to write the file name as a header. Please help me to do the same.
Assuming you're using BASH, use `basename` to get the filename with no PATH. Like:

    filename=$(basename 119XCA.fasta | cut -d'.' -f1)

Then rewrite the headers with sed: turn the first header into `>$filename` and delete the remaining header lines:

    sed -i "1s/^>.*/>$filename/;2,\${/^>/d}" 119XCA.fasta

Remember to use double quotes so that the variable is expanded inside sed.
biostars
{"uid": 466652, "view_count": 1053, "vote_count": 1}
Hi, I am using the gencode v19 gtf file for annotation of my chipseq data ( source - ftp://ftp.sanger.ac.uk/pub/gencode/Gencode_human/release_19/) and the description of the format of the file (http://www.gencodegenes.org/gencodeformat.html) mentions that there is an optional field called transcript_support_level. However I cannot find that information for any of the records in the gtf file nor does any other data file in the release has that information. Surprisingly though, in the UCSC browser, for all gencode transcripts, the transcript support level has been mentioned ( which to me seems like that there should be a file somewhere that contains this tag). The gtf files from UCSC gencode track/Ensembl FTP also do not contain this record. Has anybody used this field before? and if so how? Does anybody have an idea on where can I source the data sheet that would contain the transcript support level information? Thanks!
We brought them in for the most recent Ensembl/GENCODE release, which is Ensembl 77/GENCODE 21, so they will not be available in Ensembl 75/GENCODE 19. Also, at present they are not being included in GTF files.
biostars
{"uid": 120522, "view_count": 4678, "vote_count": 1}
Hi, We have run a pilot RNA-Seq study and I used the `edgeR` package to obtain differential expression results. The results output a `gene` column along with `logCPM`, `logFC` and `p-value` columns. I have a question regarding the conversion of log2 scale data to linear data for certain analyses. How can this be accomplished? Is there any R package to do this? For instance, can I use the base functions in R like below:

    Linear FC value = 2^(logFC)
    Linear CPM value = 2^(logCPM)

Thank you, Toufiq
All results output from edgeR are on a log-2 scale, so yes you can unlog the logFC and logCPM values using the formula you give (as @rpolicastro already confirmed in his answer). Let me say philosophically though that I don't agree that unlogged values can be described as "linear". Expression results are usually best analysed on a log scale and unlogged values do not behave in a linear fashion for most purposes. Note it is especially important not to try to undo edgeR's logFC moderation, as mediated by the prior.count values, if you are working on an unlogged scale. If you really did want unmoderated fold-changes then you would simply set `prior.count=0` in the call to `exactTest` instead of doing your own ad hoc hacking. I can't imagine how that could be a good idea however. Getting infinite fold-changes from tiny counts (0 vs 1 for example) does not help anyone.
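In practice, picking up from the answer above (a sketch; `y` is assumed to be a DGEList that has already been through `calcNormFactors()` and `estimateDisp()`):

    et  <- exactTest(y)
    tab <- topTags(et, n = Inf)$table

    tab$FC  <- 2^tab$logFC    # unlogged (moderated) fold change
    tab$CPM <- 2^tab$logCPM   # unlogged average expression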
biostars
{"uid": 9485821, "view_count": 2650, "vote_count": 1}
Hello everyone, I don't really know how to put this, but I'm recently trying to improve myself in coding and making tools. I was mostly doing a lot of bash things because it was the easiest way. But I know I should review my habits to be more portable. This was my first year working on a cluster environment (SLURM), and I think I still don't get all the concepts behind it. I was fine with just doing simple "sbatch", "srun" or "sarray" commands, but I would like to understand more about how it works. A concrete example: I want to create a python script that calls a tool (minimap2) installed on our cluster. This tool can only be used after a "module load" of the appropriate module (in that case bioinfo/minimap2-2.5). So I was giving it a shot with `subprocess.call` and `subprocess.run` (from which I understood they were sort of the same but not from the same python version?) to perform that module load. But it kept failing, and I couldn't find anything about it on the net. I feel like I'm doing something aberrant... I'm not even sure if what I am trying to do is relevant (slurm module load from within python?). Can anyone correct me and maybe explain what I'm doing wrong? Maybe if you have any website or book to recommend to better understand this topic. I feel very nooby, but I still want to improve. Thanks for your help, sorry if this is not a proper Biostar question, Roxane
The reason you are having trouble loading a module on the HPC with `subprocess.call`, etc., is because each subprocess is executed in a new, independent process. So even if you did a `subprocess.call` for something like "module load xyz", the loaded module will not persist into your following calls to use that program. If you really wanted to do that, you would have to do something like `subprocess.call('module load xyz; run_my_program')`, probably with the `shell=True` option enabled I believe. Honestly your best bet is to use a workflow manager like Snakemake or [Nextflow](https://www.nextflow.io/). You should not couple your scripts directly to the HPC environment you are using, because they will then become unusable on other environments. For example on Nextflow, you can easily create a pipeline to run your script on your samples, and then configure Nextflow to submit the task as a batch job on [SLURM](https://www.nextflow.io/docs/latest/executor.html), and also configure it to first load the correct [modules](https://www.nextflow.io/docs/latest/process.html#module). Something like this would be the most portable, since it will allow you to update the execution configuration for a new system without having to modify any aspects of your pipeline's tasks.
biostars
{"uid": 395907, "view_count": 3614, "vote_count": 1}
<p>I have few motifs whose location on a given DNA sequence has been identified using MATCH (from TRANSFAC). MATCH provides the location of these matrices in a numerical way as motif 1 236 (+) (for an example - motif 1 is present at location 236). I want to get a graphical output by feeding the length of DNA sequence and the the position of matrices so that its easy for presentation and comparing two different sequences with almost similar motifs</p> <p>Thanks in advance for any suggestions</p>
Perhaps look at [WebLogo](http://weblogo.threeplusone.com), which assists with making [sequence logos](http://en.wikipedia.org/wiki/Sequence_logo).
biostars
{"uid": 54669, "view_count": 4475, "vote_count": 9}
I am dealing with a bunch of inversions and running some analysis on each of them. One of them is located on chromosome one; the starting position is 6199548 and the ending position is 9699260, so the length of this region is 3499712 bp. I want to generate non-overlapping windows based on the length of this focal position (3499712) and assign the focal position to one of these windows that overlaps with the starting and ending position of my focal position. I am achieving this by using the `GenomicRanges::tileGenome` function in `R`:

    seqlength <-c(chr1=159217232,chr2=184765313,chr3=182121094,chr4=221926752,chr5=187413619)
    bin_length<-c(3499712)
    bins<-tileGenome(seqlength,tilewidth=bin_length,cut.last.tile.in.chrom = T)
    #bins_u<-unlist(bins)
    write.table(bins,file=paste("bins_good_",bin_length,sep=""),col.names = FALSE, row.names = TRUE)

This generates non-overlapping 3499712 bp windows across the genome.

    window_id  chromosome  start     end       length
    "1"        "chr1"      1         3499712   3499712
    "2"        "chr1"      3499713   6999424   3499712
    "3"        "chr1"      6999425   10499136  3499712
    "4"        "chr1"      10499137  13998848  3499712
    "5"        "chr1"      13998849  17498560  3499712

However, I have a major problem here that I have not been able to sort out. I want one of these windows to exactly overlap the starting and ending position of my focal position (6199548:9699260), only the focal position and not any other position outside this region. This means 2 windows (one before and one after the focal position) will have a different length, which is fine for my analysis. I just do not know how to do it. This is my desired output:

    window_id  chromosome  start     end       length
    "1"        "chr1"      1         3499712   3499712  "*"
    "2"        "chr1"      3499713   6199547   2699834  "*"
    "3"        "chr1"      6199548   9699260   3499712  "*"
    "4"        "chr1"      9699261   13998848  4299587  "*"
    "5"        "chr1"      13998849  17498560  3499712  "*"

As you see, the second window is now smaller, the third window accommodates the focal position, and the fourth window is a little larger than the bin size. Does anyone have any idea how I can do this task in R? Help me please, I am really stuck!
I'm not sure how you did your calculation but for me it looks like your region is **3499713** bp in length. site <- GRanges("chr1:6199548-9699260") seqlength <- c(chr1=159217232,chr2=184765313,chr3=182121094,chr4=221926752,chr5=187413619) bin_length <- width(site) ## check length of region print(bin_length) ##3499713 bins <- tileGenome(seqlength, tilewidth=bin_length, cut.last.tile.in.chrom=TRUE) ## extract overlapping bins flank.bins <- subsetByOverlaps(bins,site) ## find new coordinates excluding region of interest new.flank.bins <- setdiff(reduce(flank.bins),site) ## combine new bins with your region of interest new.bins <- c(new.flank.bins,site) ## remove old bins bins <- subsetByOverlaps(bins,new.bins,invert=T) ## add new bins bins <- sort(c(bins,new.bins)) ## check that it makes sense ##table(width(bins[seqnames(bins) == 'chr1'])) write.table(bins,file=paste("bins_good_",bin_length,sep=""),col.names= FALSE, row.names = TRUE)
biostars
{"uid": 367947, "view_count": 1407, "vote_count": 1}
Hi, apologies for this basic question as I am new to the field. I have been checking my NGS data using FastQC and the checks fail on the Kmer content section. There appears to be a sequence TAGATCGGAA at position 90-100 bp in the reads which is enriched around 12-fold (Obs/Exp Max). However, nothing shows up in the 'Overrepresented sequences' or 'Adapter content' sections of the report (completely flat line). The sequencing was done at BGI and I do not know what primers were used. My question is: should I be trying to remove this Kmer sequence?

<img alt="screenshot" src="http://i483.photobucket.com/albums/rr194/Kez_Cleal/Screen%20Shot%202015-05-28%20at%2013.11.25_zpsagvhw2h3.png" style="height:50%; width:50%" />

If I use grep on the first million reads in my fastq file to look for the sequence, I only find 36, which seems quite low:

    $ gunzip -c data1.fq.gz | head -4000000 | grep TAGATCGGAA | wc -l

Thanks for the help
FastQC often displays failed checks - it does not mean your data is bad, they are just warnings. The only thing you should always really check is the quality along the length of your reads, which might force you to do some trimming. In this case, I would say that your reads are ok as they are, and you can proceed to the alignment phase of your workflow.
biostars
{"uid": 144354, "view_count": 7236, "vote_count": 8}
<p>The plot is from <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3575466/pdf/cc-12-379.pdf">this publication, see page 2</a> the bottom of the plot with ACTG letters in different colours and sizes.</p> <ol> <li>Could anyone clarify in layman&#39;s terms what it represents?</li> <li>More importantly, how do we plot it using R or any other tools?</li> </ol> <p>Here is the plot, if you can&#39;t access the paper:</p> <p><img alt="" src="https://dl.dropboxusercontent.com/u/32553873/StackOverflow/Dennisplot.PNG" style="height:198px; width:390px" /></p> <p>Found related post at SO: <a href="http://stackoverflow.com/questions/5438474/plotting-a-sequence-logo-using-ggplot2">Plotting a &ldquo;sequence logo&rdquo; using ggplot2?</a></p>
Here is [**motiflogo**][1], which can make a SNP-specific motif logo representation. See figure, please. <img alt="" src="http://i.imgur.com/9gJPnJx.png?1" style="height:521px; width:685px" /> [1]: https://github.com/zhilongjia/motiflogo
biostars
{"uid": 150308, "view_count": 6632, "vote_count": 2}
Dear all, I have a fasta file containing marker IDs:

    >marker1
    AGCTGGGTGTCGTTGATCTTCAGGTCCTTCTGGATGTACAGCGACGCTCCActcatcacg
    ccgatgtcgtagccgaggaggatggaggtcatggaggcgag
    >marker2
    CTCGTCGGTTTCGACCACCATGTTGTCATGGGCATAGCATAGTGAACTCCACTCACAGCT
    AAAAAGGAACGGTGCCTGCCTTCTTTTCAGTATCTAATTAC
    >markern
    CATCTAATTTGTTGGACGGGGATATGCTGGGAAAACGAAGAGCAAATCTGTGCCCACTGT
    GCTGGTCGCTGCAGAATCTATTCCCCATTATGTTGCCTGTT

I would like to align these markers against the reference genome and find the position of the variant for every single marker, if there are any:

    #CHROM  POS      ID       REF  ALT
    chr1    1101439  marker1  C    A
    chr1    1104710  marker2  A    T
    chr1    1104748  markern  C    T

I am trying to use `bwa`/`bowtie` to align and `bcftools`/`gatk` to call variants, but I do not know how to also include the marker ID in the VCF file. Or maybe this is possible by parsing the BLAST output?
If you want to keep the marker ID, it's easier to align with BLAST and parse the result to generate a custom VCF: standard callers such as bcftools or GATK report positions but not which query sequence produced each call, whereas BLAST carries the query ID through to every hit.
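A rough sketch of that approach on the command line (BLAST+ assumed installed; file names are placeholders, and the awk step assumes GNU awk and handles only ungapped, plus-strand hits; gapped or minus-strand hits would need extra handling):

    # build the database and align the markers
    makeblastdb -in reference.fa -dbtype nucl
    blastn -query markers.fa -db reference.fa \
        -outfmt "6 qseqid sseqid sstart qseq sseq" > hits.tsv

    # walk the aligned query/subject strings and print CHROM POS ID REF ALT
    awk 'BEGIN{OFS="\t"; print "#CHROM","POS","ID","REF","ALT"}
         $4 !~ /-/ && $5 !~ /-/ {
             n = split($4, q, ""); split($5, s, "")
             for(i = 1; i <= n; i++)
                 if(q[i] != s[i]) print $2, $3+i-1, $1, s[i], q[i]
         }' hits.tsv

The marker ID travels through as the BLAST query ID, which is exactly what the standard callers drop.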
biostars
{"uid": 471441, "view_count": 1428, "vote_count": 1}
Hi, is there a way I can introduce a mutation into a BAM file at a specific position of a chromosome, without re-running the alignment? For example, I would like to change the base from A>C at chr1:12345.
I quickly wrote: http://lindenb.github.io/jvarkit/Biostar404363.html

```
$ samtools view src/test/resources/toy.bam | tail -1
x6  0  ref2  14  30  23M  *  0  0  TAATTAAGTCTACAGAGCAACTA  ???????????????????????  RG:Z:gid1

$ cat jeter.vcf
ref2  14  A  C

$ java -jar dist/biostar404363.jar -p jeter.vcf src/test/resources/toy.bam | tail -1
x6  0  ref2  14  30  23M  *  0  0  CAATTAAGTCTACAGAGCAACTA  ???????????????????????  PG:Z:0  RG:Z:gid1  NM:i:1
```
biostars
{"uid": 404363, "view_count": 2292, "vote_count": 1}
Hi, I have managed to merge two different files using a common column with the same information, using:

    Combine_Files <- merge(File1, File2, by="Symbols")

However, I would like to know if it's possible to merge two different *.csv files where one file (file 1) has the gene symbols only (one gene per row), and the other file (file 2) has the gene symbols plus other annotation columns, with many related genes per row separated by commas, one of which is shared with file 1. I would like to merge both files by mapping the gene symbols from file 1 and extracting the other annotation columns from file 2 into the combined file for further data analysis. I would like to know how this could be done.

For example, File_1:

    Gene_Symbols
    1. GeneA1
    2. GeneA2
    3. GeneA3

For example, File_2:

    Gene_Symbols
    1. GeneA1, GeneX1, GeneX2, GeneD
    2. GeneA2, GeneL1, GeneP2, GeneNA
    3. GeneB3, GeneA3, GeneLP1, GeneNA1

Other columns in File_2: Phenotype, GO Ontology, Pathways

Expected output: Gene_Symbols, Phenotype, GO Ontology, Pathways

Thank you, Toufiq
Use the `tidyr` package. The function `separate_rows` will allow you to take your file_2 and split each row so that each row has only one gene symbol, with the information in the other columns copied onto every new row. Thus

    GeneA1,GeneX1,GeneX2    WO    cell processes    Pathway 1

becomes

    GeneA1    WO    cell processes    Pathway 1
    GeneX1    WO    cell processes    Pathway 1
    GeneX2    WO    cell processes    Pathway 1

You can then do your merge as before.
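A minimal sketch with the column names from the question (the separator regex assumes a comma followed by optional whitespace):

    library(tidyr)

    # one gene symbol per row; Phenotype / GO Ontology / Pathways are copied
    file2_long <- separate_rows(File_2, Gene_Symbols, sep = ",\\s*")

    # now the usual one-gene-per-row merge works
    Combine_Files <- merge(File_1, file2_long, by = "Gene_Symbols")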
biostars
{"uid": 399972, "view_count": 983, "vote_count": 1}
Hello guys, It may seem like a basic question, but this is causing confusion. What is the difference between fold change and Log fold change? Regards,
I find pictures helpful when thinking about these concepts, so quickly made some visualizations for you. They expand on WouterDeCoster's and h.mon's comments. The figures were made in R with the following code:

    # Generate fake data
    fake <- data.frame("gene" = as.numeric(c(1:50)),
                       "pre"  = as.numeric(seq(1, 99, 2)),
                       "post" = as.numeric(seq(99, 1, -2)))

    # Calculate fold change
    fake$fold <- fake$post / fake$pre

    # Calculate log2 fold change
    fake$logfold <- log2(fake$fold)

    # Visualize
    library(ggplot2)
    ggplot(fake, aes(x = gene, y = fold)) + geom_line()
    ggplot(fake, aes(x = gene, y = logfold)) + geom_line() + ylab("Log2(FC)")

Let's say you have 50 genes (for the sake of convenience, the genes are called "1", "2", "3", ..., "50") and you measure expression before and after some type of treatment. You then measure the fold change of the genes due to treatment: FC = expression(post) / expression(pre). As russhh indicated, the logFC is simply log2(FC).

In this dataset, half the genes were upregulated and the other half downregulated. Interpreting the untransformed fold change is tricky: here it looks like gene1 had a huge fold change, close to 100, but what about genes 30-50? It's hard to tell what their value is, and also by how much they differ from each other.

![enter image description here][1]

Performing the log2 transformation scales the data and facilitates the interpretation, as illustrated below: the differences in expression in genes 30-50 are now much more obvious.

![enter image description here][2]

[1]: https://i.imgur.com/dD7RC7n.png
[2]: https://i.imgur.com/5GywhTT.png
biostars
{"uid": 312980, "view_count": 33880, "vote_count": 7}
Hi Biostar, apologies for this basic question, but I cannot find a simple answer. I want to sequence several mRNA samples, paired-end 2x150, and for each sample I want 100 million reads (100 million forward and 100 million reverse). The supplier proposes to sequence 3 samples per lane of a HiSeq 4000 flow cell. So as I understand it: 3 samples x 200 million reads (100 million forward and 100 million reverse) = 600 million reads per lane. But when I looked up the performance of the Illumina HiSeq 4000, I found this technology can generate only ~312 million clusters per lane. How can it generate 600 million reads per lane? Would it be possible to give me a simple answer? Thank you in advance.
What is sequenced is a so-called "fragment". To simplify: they will sequence each sample to a depth of 100 million fragments, and each fragment is sequenced from both ends, i.e. paired-end sequencing. For each fragment you will therefore have two reads, forward and reverse. So what the provider tells you is correct. Anyway, always ask them for details and explanations if you have doubts, to avoid potential misunderstandings and waste of resources.
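To make the arithmetic explicit with the numbers from the question: one cluster corresponds to one fragment, so 3 samples x 100 million fragments = 300 million clusters, which fits under the ~312 million clusters per lane; each cluster is read from both ends, giving 300 million x 2 = 600 million reads per lane.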
biostars
{"uid": 344950, "view_count": 3738, "vote_count": 1}
Hi guys, I have the following data frame `df`:

               IID           PC1          PC2          PC3           PC4          PC5          PC6           PC7           PC8           PC9
    1  11:00223.CEL  0.00229647  -0.000423608  0.001480000   1.02983e-03   0.00418171  -0.00550339   0.003826840   0.002836460  -0.001210240
    2      9399.CEL  0.00213518  -0.000734612  0.000965396   9.84589e-04  -0.00135706   0.00396061   0.006356090   0.000639752  -0.000220536
    3   shckanc.CEL  0.00225502  -0.000971542  0.001553290   2.13150e-03  -0.00277924   0.00291143   0.007620090  -0.002271640  -0.003364920
    4  787323A1.CEL  0.00292725  -0.000399576  0.001340910   6.99230e-06  -0.00299630  -0.00769350  -0.007620050   0.002889230   0.005459370
    5    90w3jc.CEL  0.00228869  -0.000201784  0.001291800   8.68436e-04  -0.00194812  -0.00554044  -0.006723900   0.001160370   0.005532150
    6   olsj909.CEL  0.00224916  -0.000530798  0.000677401  -1.13087e-04  -0.00132783  -0.00814353  -0.000705561   0.000352494   0.000671816

Considering the first column IID, I need to extract the entire string before the dot, excluding the .CEL suffix. Just as an example: for the first row I need to extract `11:00223`, for the second row `9399`, and so on.

I tried the `stringr` library with the following command:

    str_extract(df$IID, "[[:digit:]]+")

but `[[:digit:]]+` extracts only the digits before the dot, while I need the whole string before the dot, including letters of the alphabet. Any idea about how to extract everything before the dot? Thank you!
gsub("\\..*", "", string)
biostars
{"uid": 9550112, "view_count": 480, "vote_count": 1}
Hi, I just started to work with single-end reads, which have already been trimmed for adapter sequences and quality. Do I have to trim the reads to the same length (e.g. 100nt) before mapping them with STAR? Is there a negative effect if I don't?
If the qualities are ok and there are no adapters you can proceed with mapping. There is a recent paper about trimming of RNAseq data and its possible consequence on downstream analysis - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4766705/
biostars
{"uid": 301761, "view_count": 4513, "vote_count": 2}
I have been doing some practice with Ballgown and StringTie, and I have some GTF files and the Ballgown output files. However, I can't figure out how to use R (or anything else) to produce a table containing the FPKM values. The table I want would look like this:

    gene_id  FPKM
    A        124
    B        541
    C        122

Please help me. Thanks a lot :)
You can use standard unix commands for this, as follows:

    $ cat stringtie.txt
    1  StringTie  transcript  337772  338047  1000  +  .  gene_id "Zm00001d027250"; transcript_id "Zm00001d027250_T001"; cov "66.076088"; FPKM "8.407302"; TPM "14.141658";
    1  StringTie  exon        337772  338047  1000  +  .  gene_id "Zm00001d027250"; transcript_id "Zm00001d027250_T001"; exon_number "1"; cov "66.076088";
    1  StringTie  transcript  426764  432130  1000  +  .  gene_id "Zm00001d027254"; transcript_id "Zm00001d027254_T001"; cov "2.043083"; FPKM "0.259955"; TPM "0.437262";
    1  StringTie  exon        426764  426798  1000  +  .  gene_id "Zm00001d027254"; transcript_id "Zm00001d027254_T001"; exon_number "1"; cov "0.000000";
    1  StringTie  exon        426869  426970  1000  +  .  gene_id "Zm00001d027254"; transcript_id "Zm00001d027254_T001"

    $ grep 'FPKM' stringtie.txt | cut -f9 | sed -r 's/gene_id "([^"]*).*FPKM "([^"]*).*/\1\t\2/g' | sed '1i#gene_id\tFPKM'
    #gene_id        FPKM
    Zm00001d027250  8.407302
    Zm00001d027254  0.259955
biostars
{"uid": 347423, "view_count": 4053, "vote_count": 1}
Hello, I've heard of two different ways of doing batch-effect correction:

1. Explicit:

        design(myData_collapsed) <- ~ Sex + Genotype
        dds <- DESeq(myData_collapsed)
        deseqOutput <- results(dds, alpha = 0.05, contrast = c('Genotype', 'A', 'B'))

2. Implicit:

        # make a full model matrix
        mod <- model.matrix(~ Genotype, colData(myData_collapsed))
        # make a null model to compare it to
        mod0 <- model.matrix(~ 1, colData(myData_collapsed))
        # calculate SVs
        svseq <- svaseq(assay(myData_collapsed), mod, mod0, n.sv = 4)
        myData_collapsed$SV1 <- svseq$sv[,1]
        myData_collapsed$SV2 <- svseq$sv[,2]
        myData_collapsed$SV3 <- svseq$sv[,3]
        myData_collapsed$SV4 <- svseq$sv[,4]
        # redesign matrix
        design(myData_collapsed) <- ~ SV1 + SV2 + SV3 + SV4 + Genotype
        # DESeq
        dds <- DESeq(myData_collapsed)
        deseqOutput <- results(dds, alpha = 0.05, contrast = c('Genotype', 'A', 'B'))

I only ever see people doing one of the two strategies above. However, it seems to me that it should be possible to do both, as long as you make sure to include the explicit batches in the design of your model matrix when calculating SVs, like so:

    # make a full model matrix
    mod <- model.matrix(~ Sex + Genotype, colData(myData_collapsed))
    # make a null model to compare it to
    mod0 <- model.matrix(~ 1, colData(myData_collapsed))
    # make SVs
    svseq <- svaseq(assay(myData_collapsed), mod, mod0, n.sv = 4)
    # new DESeq matrix design
    myData_collapsed$SV1 <- svseq$sv[,1]
    myData_collapsed$SV2 <- svseq$sv[,2]
    myData_collapsed$SV3 <- svseq$sv[,3]
    myData_collapsed$SV4 <- svseq$sv[,4]
    design(myData_collapsed) <- ~ SV1 + SV2 + SV3 + SV4 + Sex + Genotype
    # perform DESeq
    dds <- DESeq(myData_collapsed)
    deseqOutput <- results(dds, alpha = 0.05, contrast = c('Genotype', 'A', 'B'))

Is that correct? (Or, just realized this, should the null model include the explicit batch effect, e.g. `mod0 <- model.matrix(~ Sex, colData(myData_collapsed))`?)

Thank you for your help! I appreciate all the help I've been getting from Biostars recently, and I hope I can do a good job at documentation for others Googling this issue.
The starting point to any RNA-seq experiment should be a well-designed study that negates the need for any form of batch adjustment / correction. This is often not the case, as we know.

In the 'explicit' part, you are not actually doing any adjustment for batch. You've just got gender (sex) and genotype in the design model - are you implying that gender is your 'batch' effect for which you are aiming to adjust? Gender can be a confounding factor in mixed-gender / transgender RNA-seq studies, in particular for genes transcribed from chromosome X and / or those that have escaped chrX inactivation.

In your 'implicit' part, with your code, you are looking for any residual batch (or other) effects in your data without knowing *a priori* what they may be. This type of adjustment should not be performed without good reason: performing it without justification could lead to unnecessary adjustment of your coefficients and to unrealistic *P* values during differential expression. It could be performed in a situation like this: our laboratory receives samples with minimal information and we are told to process them. We realise, after an initial test, that there are likely unknown batch effects, so we employ SVA to model these effects and adjust for them (as you do).

-----------------------------------------------

--------------------------------

# batch effect known

When we know what the batch effect is, we should include it in the design formula, e.g.:

    design = ~ batch + sex + genotype

This has the effect of modelling the known batch effect and making adjustments for it when it comes to differential expression analysis. No further adjustments are necessary unless you wish to use your normalised / transformed counts for other tools downstream. In such a situation, there are 2 options:

- use `removeBatchEffect()` (*limma*) on the transformed expression levels to subtract out the batch effect
- set `blind=FALSE` when transforming by regularised log or variance stabilisation (*DESeq2* only) ( see here: http://bioconductor.org/packages/devel/bioc/vignettes/DESeq2/inst/doc/DESeq2.html#blind-dispersion-estimation )

# batch effect(s) unknown

If the batch effect(s) is/are unknown, we can use SVA, as you have done, in which case we arrive at:

    design = ~ SV1 + SV2 + Sex + Genotype

*NB - include as few surrogate variables in the model as possible; otherwise, you'll over-adjust*

You can then later subtract out the modelled batch effect as above.

----------------------------------------------------

Yet other methods aim to directly adjust raw or normalised counts for either of the situations mentioned above. I go over this in this previous answer: https://www.biostars.org/p/266507/#280157

Kevin
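As a small follow-on sketch of the first option above (`removeBatchEffect()` on the transformed counts), with `Sex` and `Genotype` taken from the question's design; treat it as illustrative rather than a drop-in pipeline:

    library(DESeq2)
    library(limma)

    # transform with the design in view (blind = FALSE)
    vsd <- vst(dds, blind = FALSE)

    # subtract the modelled effect while protecting the Genotype coefficient
    assay(vsd) <- removeBatchEffect(assay(vsd),
        batch  = vsd$Sex,
        design = model.matrix(~ Genotype, data = as.data.frame(colData(vsd))))

The adjusted `assay(vsd)` values are then suitable for visualisation or other downstream tools, but not for re-running differential expression, which should always work from the raw counts plus the design.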
biostars
{"uid": 333597, "view_count": 4518, "vote_count": 2}
Hi all, I am searching for the coordinates of PAR3 on chrX and on chrY. Wikipedia (see the link attached) tells me that the genes involved are PCDH11X and TGIF2LX on X (@ ~90 Mb), and PCDH11Y and TGIF2LY on Y (@ ~5 Mb). The coordinates of these genes serve as a rough orientation, but I am searching for an 'official definition', for example from UCSC or any other official institution. Do you know of any? Thx, Best, Mathias.

Wikipedia: http://en.wikipedia.org/wiki/Pseudoautosomal_region
<p>I did not know PAR3 until now. The claim of PAR3 is made by one paper from last year which has only been cited once. In the paper, the author is calling the XTR (X transposed region) PAR3. I have not read through the paper, but I think the evidence described in the abstract is very weak. Correlated copy number changes are not sufficient to define PARs - ordinary segmental duplications are also sometimes associated with non-homologous recombinations. Biologically, a PAR in the middle of chrX is also weird. If there were a single crossover between chrX and chrY in PAR3, would that create a huge hybrid chrX-chrY chromosome? I would take PAR3 with a pile of salt.</p> <p>On the other hand, the XTR is a well-known segmental duplication from chrX to chrY. You can find its approximate boundaries from the UCSC self-chain:</p> <p>https://genome.ucsc.edu/cgi-bin/hgTracks?db=hg19&position=chrX%3A87015838-93805574&hgsid=386334885_AcoZWTXT6wlO6yJPaqfacRL0k0Cl</p> <p>Note that there are massive inversions and deletions between the chrX/chrY copies (there are papers about it, but I cannot find them now). It is not possible to precisely define the exact boundaries.</p> <p>EDIT: XTR evolution: http://www.nature.com/nature/journal/v423/n6942/extref/01722/figures/fig5.pdf</p>
biostars
{"uid": 110596, "view_count": 2968, "vote_count": 1}
Hello everyone, My question comes in 2 parts. First, I have an R data frame of 1.3M human SNPs (all with an rs number) and would like to:

- Update their genomic position (the positions I have are based on an old HapMap reference; it would be nice to have them as hg38 coordinates)
- Annotate their effect (as in the CONTEXT column of the GWAS catalog, including for example: 3_prime_UTR_variant, 5_prime_UTR_variant, downstream_gene_variant, intergenic_variant, intron_variant, missense_variant, non_coding_transcript_exon_variant, regulatory_region_variant and a few other categories)

I looked at many R packages, including rsnps and BSgenome, but none of them was able to extract dbSNP information (only rsnps could extract positions, and only on a very small subset). I am aware SnpEff could (maybe, not sure) work with rsIDs, but I would like to stick with R for that part.

----

Second question: I am using SnpEff on a VCF-type file of my SNP list, and I would like to get annotation only for the coding variants, and for each of those whether it is synonymous or missense/LoF. I ran SnpEff and got a huge amount of extra information that is hard to make sense of. So I used the filters -no-downstream -no-intergenic -no-intron -no-upstream -no-utr -no EffectType (low). For some reason, the output still contained a lot of the stuff I wasn't interested in; apparently those weren't the right filters.

If anybody can help me with these questions, that would be great! I am starting to run a bit dry on the answers Google can give me. Many thanks,
R code:

    rsid <- c("rs123","rs150")
    library(biomaRt)
    ensembl_snp <- useMart("ENSEMBL_MART_SNP", dataset="hsapiens_snp")
    getBM(attributes=c("refsnp_source",'refsnp_id','chr_name','chrom_start','chrom_end',"consequence_type_tv","clinical_significance"),
          filters = 'snp_filter', values = rsid, mart = ensembl_snp)

output:

    > getBM(attributes=c("refsnp_source",'refsnp_id','chr_name','chrom_start','chrom_end',"consequence_type_tv","clinical_significance"), filters = 'snp_filter', values = rsid, mart = ensembl_snp)
      refsnp_source refsnp_id chr_name chrom_start chrom_end consequence_type_tv
    1         dbSNP     rs123        7    24926827  24926827      intron_variant
    2         dbSNP     rs150        7    24971033  24971033      intron_variant
      clinical_significance
    1                    NA
    2                    NA

The coordinates seem to match the dbSNP records.
biostars
{"uid": 285497, "view_count": 6185, "vote_count": 3}
Hi, I'm an undergraduate student. Please help me. I want to generate a mapped BAM file from an SRA dataset from NCBI in order to analyse the heterogeneity of mouse ESCs. The dataset was generated for the paper below (GSE60749):

*Roshan M. Kumar et al. Deconstructing transcriptional heterogeneity in pluripotent stem cells. Nature (2014)*

Please tell me how to find the adapter sequences used in this experiment, and what I should use for quality control (Prinseq? ShortRead?). For example, I want to try the SRA file for GEO sample [GSM1486817][1]. Can anyone give me the quality-control steps for this SRA file? Thank you for reading this through. Any help will be appreciated.

[1]: http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSM1486817
1. Download the data.
2. Convert from `.sra` format to `.fastq` with the **SRA Toolkit**: http://www.ncbi.nlm.nih.gov/Traces/sra/sra.cgi?view=software
3. Check the quality using the **FastQC** package: http://www.bioinformatics.babraham.ac.uk/projects/fastqc/. In the generated report you will see whether your sequences have adapters and, if so, their names.
4. Remove the adapters and bad-quality sequences using:
 - **Trimmomatic**: http://www.usadellab.org/cms/?page=trimmomatic
 - **AdapterRemoval**: https://github.com/mikkelschubert/adapterremoval
 - **Cutadapt**: http://journal.embnet.org/index.php/embnetjournal/article/view/200/479
 - ... and many other tools. I would suggest Trimmomatic.

These are the general steps you should follow in order to perform quality control of the raw data. Hope it helps.
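As a rough command-line sketch of steps 2-4 (the accession, the Trimmomatic version and the adapter file are placeholders; the SRA run behind GSM1486817 goes where SRRxxxxxxx is written):

    # 2. sra -> fastq
    fastq-dump --split-files SRRxxxxxxx.sra

    # 3. quality report; check the adapter / overrepresented-sequence sections
    fastqc SRRxxxxxxx_1.fastq

    # 4. adapter and quality trimming (single-end example)
    java -jar trimmomatic-0.36.jar SE SRRxxxxxxx_1.fastq trimmed.fastq \
        ILLUMINACLIP:adapters.fa:2:30:10 SLIDINGWINDOW:4:20 MINLEN:36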
biostars
{"uid": 143961, "view_count": 8318, "vote_count": 2}
Hi. I'm trying to do random sampling with R. When I run

    library(GenomicAlignments)
    library(rtracklayer)

    ## your bam file
    bam.file <- 'your_bam_file.bam'

    ## is it paired end or not
    paired <- TRUE

    ## read in the bam file
    if(paired){
      bam.input <- readGAlignmentPairs(bam.file)
    }else{
      bam.input <- readGAlignments(bam.file)
    }

    ## select your number of subsets
    num.samples <- 100

    ## subsample WITH replacement
    bam.subsample <- sample(bam.input, num.samples, replace=TRUE)

    ## export new bam file
    export(bam.subsample, BamFile('bam_subsampled.bam'))

I get the following error message:

    [E::hts_idx_push] Region 554468587..554468738 cannot be stored in a bai index. Try using a csi index with min_shift = 14, n_lvls >= 6
    Error in value[[3L]](cond) : 'asBam' failed to build index file: bam_subsampled.bam
      SAM file: 'bam_subsampled.sam'
    In addition: Warning message:
    In .make_GAlignmentPairs_from_GAlignments(gal, strandMode = strandMode, :
      79328 alignments with ambiguous pairing were dumped.
      Use 'getDumpedAlignments()' to retrieve them from the dump environment.

I searched for a csi option in Rsamtools and GenomicAlignments, but I wasn't able to find one. How can I solve this error? Thanks.
Simply add `index = FALSE` to avoid building the index. `export(bam.subsample,BamFile('bam_subsampled.bam'), index = FALSE)` If you need the index you can create it with samtools at any time (`samtools index -c bam_subsampled.bam`) Also asked for some details over at the BioC forum, see what they say. https://support.bioconductor.org/p/126603/
biostars
{"uid": 408943, "view_count": 1044, "vote_count": 1}
Hi, I have been able to create my own linear workflows (A -> B -> C -> D) with Snakemake. However, now I would like to include some optional steps (X) that should only be executed if the user specifies it. Briefly, most of the time C will take as input the output from B, but sometimes I would need X to take as input the output from B, and then C to take the output from X.

![enter image description here][1]

After looking at the documentation I have not been able to figure out how to do this. I don't even know if this is feasible, or if there is another approach that fits better. I would appreciate some guidance here. Thanks!

[1]: https://i.ibb.co/3TkqHHx/Diagram.png
The input to `C` can be a function that defines files based on the wildcard for the output from `C`. You could write a function that decides what the input to C should therefore be.

When you say that "sometimes I would need X to take as input the output from B and then C to take the output from X", do you mean that the optional use of X is decided for a given sample within your workflow (sample 1 might pass through X, but sample 2 might not need to), or that the whole workflow should optionally use X based on some config/argument (for a given experiment, choose to run all samples through X)?

Something like this:

    # optionally use X for every sample passing through the workflow
    def input_for_c(wildcards):
        # requires a config containing switches for the whole workflow
        if config["Use X"]:
            return "./data/X/{}".format(wildcards.sample_id)
        else:
            return "./data/B/{}".format(wildcards.sample_id)

    # optionally use X for the current sample
    def input_for_c(wildcards):
        # requires a `sample_config` containing switches for each separate sample
        if sample_config[wildcards.sample_id]["Use X"]:
            return "./data/X/{}".format(wildcards.sample_id)
        else:
            return "./data/B/{}".format(wildcards.sample_id)

    rule all:
        input:
            expand("./data/C/{sample_id}", sample_id = SAMPLES)

    rule B:
        output: "./data/B/{sample_id}"
        ...

    rule X:
        output: "./data/X/{sample_id}"
        ...

    rule C:
        input: input_for_c
        output: "./data/C/{sample_id}"
        ...
biostars
{"uid": 432785, "view_count": 2247, "vote_count": 2}
What is the difference between Phred and GQ (genotype quality)? What values make a good cut-off?
The Phred scale is used to represent an error probability in logarithmic form:

    Phred score = -10 * log10(error probability)

For a base quality, the error probability is the chance that the base call is wrong. Genotype quality (GQ) is how confident the genotype-calling algorithm is about the genotype it called, expressed on the same Phred scale: GQ = -10 * log10 P(the called genotype is wrong).
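A quick numeric illustration of the conversion (plain R):

    phred <- function(p) -10 * log10(p)
    phred(0.001)  # 30: Q30 means a 1-in-1000 chance the base call is wrong
    phred(0.01)   # 20: GQ 20 means a 1-in-100 chance the genotype call is wrong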
biostars
{"uid": 308538, "view_count": 966, "vote_count": 1}
<p>Hi everybody,</p> <p>I see that the latest human genome version at NCBI is GRCh38. I can find GRCh37 in the online browser, but I cannot find it among the downloads: on the download page, the only version is GRCh38. Does anyone know where to download the GRCh37 files from NCBI?</p>
<p>under ftp://ftp.ncbi.nlm.nih.gov/genomes/Homo_sapiens/ARCHIVE/BUILD.37.3</p>
biostars
{"uid": 123519, "view_count": 17697, "vote_count": 2}
I am looking for a tool to de-duplicate FASTQ files based on UMIs, which are known for each read. The tool would likely pool identical/similar UMIs and check for high similarity between the reads of each pool. The purpose is to reduce processing down the pipeline, in particular to allow mapping of RNA-seq to the transcriptome rather than to the genome.
I know of no tool that does this, and there are probably good reasons for that. In the case of most protocols that use UMIs, **the UMI alone simply isn't unique enough to uniquely identify a pre-PCR molecule**.

Consider: for deduplicating only on a UMI to work, it has to be far more likely that two reads with the same UMI are PCR duplicates than that two independent molecules got the same UMI. With a 10nt UMI there are about 1 million different possible UMI sequences. A standard RNA-seq library, for example, might contain around 30 million reads. But the situation is worse than this: UMIs can contain sequencing errors, thus software like UMI-tools doesn't just assume that two reads with the same UMI sequence are duplicates, but that two reads with similar UMI sequences are duplicates. Finally, usage of supposedly random UMI sequences is not actually random: some sequences are more likely to be used than others.

Thus to distinguish duplicates from two reads that just happen to have got the same UMI sequence, we need more information. What information is appropriate depends on the protocol that created the data. Simply put, if PCR happens after fragmentation, then reads with different mapping co-ordinates are likely to have come from different molecules. This applies to techniques like iCLIP, 4C, ChIP-seq and standard RNA-seq. **In this case you might find that duplicates share the same complete read sequence, both the cDNA part and the UMI part, but you will miss reads that merely have similar sequences due to a sequencing or PCR error.**

In other cases fragmentation comes after PCR. In this case two reads from the same original molecule can have different mapping co-ordinates, but there are limits: in 3' end tagging RNA-seq (e.g. droplet-based single-cell RNA-seq) two reads coming from different genes cannot be PCR duplicates. In amplicon sequencing, two reads from different amplicons cannot be from the same molecule. **In this case you cannot use the rest of the read to decide if something is a duplicate or not, and there really isn't anything you can do without mapping or transcript assignment of some sort.**

## Solutions

I strongly recommend that you follow standard workflows. Otherwise you might try:

1. If you are doing droplet-based single-cell RNA-seq (e.g. 10X Chromium or Drop-seq) and are looking for a low-resource way to process the data, you might like to try the newly released `alevin`, which takes fastqs and outputs gene counts, and does so with a 10x time and memory reduction compared to tools like Cell Ranger. It incorporates a UMI-deduplication algorithm inspired by UMI-tools, but which properly deals with transcript ambiguity. `alevin` is part of the latest release of `salmon` and can be obtained at https://github.com/COMBINE-lab/salmon

2. **If your protocol has PCR and UMI addition only after fragmentation**, then the tool [`tally`][1] will de-duplicate identical fastq records. You should be able to just pass in your raw reads. But be aware that any sequencing or PCR errors will mark a read as not a duplicate when it is.

3. The latest versions of UMI-tools have an importable Python module which you can use for implementing your own deduplication procedure using UMI-tools' error-aware barcode collapsing algorithm:

        from umi_tools.network import UMIClusterer

        # umis is a list of UMI sequences, e.g.
        #    ["ATAT", "GTAT", "CCAT"]
        # counts is a dictionary mapping these UMIs to counts, e.g.:
        #    {"ATAT": 10, "GTAT": 3, "CCAT": 5}
        # threshold is the edit distance threshold at which to cluster
        clusterer = UMIClusterer(cluster_method="directional")
        clusters = clusterer(umis, counts, threshold)

        # clusters is now a list of lists, where each sub-list is a cluster of
        # umis we believe are PCR duplicates, e.g.
        #    [["ATAT", "GTAT"], ["CCAT"]]

Use `alevin` if your data is of the right type, otherwise I do recommend the standard methods (i.e. not 1 or 2 above). With the exception of Cell Ranger, we find that actually processing the fastq file is the most time-consuming part of the pipeline, and that memory usage is dominated by the mapper, which is independent of the size of the input, so you will not gain much, in terms of time or memory, by deduplicating before mapping anyway.

CoI: I am the author of `UMI-tools` and an author on the `alevin` paper.

[1]: https://www.ebi.ac.uk/research/enright/software/kraken
biostars
{"uid": 345081, "view_count": 9407, "vote_count": 3}
Dear Biostar Community, What is the best tool for simulating reads from given reference sequences, or any tool that you have used and that worked as expected? Desired properties of the tool:

- generates reads of a given insert size and read length with uniform coverage
- handles uniform coverage on circular genomes
- handles large genomes without crashing
- generation of Illumina reads
- generation of PacBio reads
- generation of reads without errors
- generation of reads contaminated by technical sequences
- generation of read-through adapter artefacts

I used ART (http://www.niehs.nih.gov/research/resources/software/biostatistics/art/) but it looks like it uses too old a model for Illumina.
[randomreads][1] from BBMap. <br>PacBio reads can be generated [this][2] way. [1]: http://seqanswers.com/forums/showthread.php?t=48988 [2]: https://sourceforge.net/p/bbmap/discussion/general/thread/25863b5d/#738f/91e5
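For example, a typical invocation might look like the following (flag values are placeholders; check the built-in help of `randomreads.sh` for the authoritative list of options):

    randomreads.sh ref=genome.fa out=reads.fq.gz paired=t \
        length=150 coverage=30 adderrors=f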
biostars
{"uid": 186688, "view_count": 1909, "vote_count": 3}
Hi, I have one question. After filtering I normally get the reads that pass (i.e. satisfy the conditions), but I need the reads that were removed. For example, I set the filter to keep reads with mapQ greater than 10, using `samtools view -bq 10 ...`, and I get reads with mapQ greater than 10; but I want the reads with mapQ lower than 10. Could you please help, if this is possible? The other option: compare the raw BAM file with the filtered BAM file and get the difference between them. Could you help me please, or do you know some tool to do this?
Just use awk or another scripting language, for example `samtools view foo.bam | awk '{if($5<10) print $0}'`. You can make that a bit more complicated if you want to keep the header (useful for piping back into samtools for conversion to a BAM file), but that's enough to give you the idea. BTW, pysam is useful in general for more complicated tasks.
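For instance, to keep the header and write the complement (MAPQ < 10) straight back to BAM (recent samtools versions auto-detect SAM on stdin):

    samtools view -h foo.bam \
        | awk '$1 ~ /^@/ || $5 < 10' \
        | samtools view -b - > lowmapq.bam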
biostars
{"uid": 100611, "view_count": 2015, "vote_count": 1}
I have a VCF file (format VCFv4.0), generated by a GATK pipeline starting from Illumina reads. I need to convert it to the 23andMe file format. Example of the 23andMe format:

    # rsid      chromosome  position  genotype
    rs4477212   1           82154     TT
    rs3094315   1           752566    TC
    rs3131972   1           752721    AA
    rs12124819  1           776546    AC

I am having problems with plink2: `--recode 23 cannot be used with multi-char alleles`. Plink was recommended earlier here: https://www.biostars.org/p/198034/#274523

I then tried to modify the VCF to remove multi-char alleles using [VcfMultiToOneAllele,][1] which did a great job, but the output file, even though it looks like a VCF, was not recognised as such by plink2: `no genotype data in .vcf file`. Any other tool up to the task? Thanks for any help.

[1]: https://lindenb.github.io/jvarkit/VcfMultiToOneAllele.html
OK, it seems I have solved it using:

    plink2 --vcf [vcf file] --snps-only --recode 23

Now I am thinking about how to include single-base deletions and insertions in the output file, because those are missing.
biostars
{"uid": 274804, "view_count": 12277, "vote_count": 1}
I have a data frame of omics data: gene IDs in rows (931) and samples in columns (15).

    > dim(my_data) # (rows columns)
    [1] 931 16

I created a heatmap using `library(gplots)`:

    cn = colnames(gdf1)[c(13:15, 1:12)]
    col <- colorRampPalette(c("red", "yellow", "darkgreen"))(30)
    heatmap.2(as.matrix(gdf1[, cn]),
              dendrogram = "row",
              Colv = FALSE, Rowv = TRUE,
              scale = "none",
              col = col,
              key = TRUE, density.info = "none", key.title = NA,
              key.xlab = "Abundance",
              trace = "none",
              margins = c(7, 15))

<a href="https://ibb.co/C94gqgF"><img src="https://i.ibb.co/3FV2w2H/Rplot03.png" alt="Rplot03" border="0"></a><br />

However, in the heatmap I can see only a few of the gene labels, since it has ~900 genes. How can I export the genes in each cluster, in the same order as they appear in the heatmap? Also, how can I reduce the size of the colour key? Thank you.
**Edit 13th September, 2019:** To additionally see how to extract clusters of genes from the heatmap dendrogram, zoom down to this later comment: https://www.biostars.org/p/398548/#398618

---------------
---------------

Hello,

First, create random data
----

    mat <- matrix(rexp(200, rate=.1), ncol=20)
    rownames(mat) <- paste0('gene', 1:nrow(mat))
    colnames(mat) <- paste0('sample', 1:ncol(mat))

    mat[1:5,1:5]
              sample1   sample2   sample3   sample4     sample5
    gene1   0.6247039  3.020142  8.303563  6.482744  0.59547154
    gene2   2.6650871  3.375123  5.778222 19.410709  0.07966728
    gene3   4.6343755  5.491166  8.716883  9.490372 29.03157875
    gene4  13.6086878  3.632815 10.688699  1.263853  2.54216953
    gene5   2.4060078 14.283380  8.592085  3.998141  0.25853135

Generate a heatmap and save it to `out`
---

    out <- heatmap.2(mat)

<a href="https://ibb.co/nDQhSKW"><img src="https://i.ibb.co/dQknyHh/fff.png" alt="fff" border="0"></a>

Obtain list of genes, ordered as per heatmap (from bottom, up):
---

    rownames(mat)[out$rowInd]
     [1] "gene2"  "gene9"  "gene7"  "gene8"  "gene4"  "gene5"  "gene3"  "gene6"
     [9] "gene1"  "gene10"

Plot the row dendrogram on its own:
------

    plot(out$rowDendrogram)

<a href="https://imgbb.com/"><img src="https://i.ibb.co/8MSKBsX/hhhhh.png" alt="hhhhh" border="0"></a>

Change colour key size
-------

Use the `keysize` parameter.

----------------------------------
-----------------

See also here for pheatmap: https://www.biostars.org/p/287512/#287518

Kevin
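To go one step further and cut the row dendrogram into discrete clusters (k = 3 here is arbitrary):

    # convert the dendrogram back to an hclust object and cut it
    clusters <- cutree(as.hclust(out$rowDendrogram), k = 3)

    # genes in heatmap order (from bottom, up) with their cluster assignments
    data.frame(
        gene    = rownames(mat)[out$rowInd],
        cluster = clusters[rownames(mat)[out$rowInd]])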
biostars
{"uid": 398548, "view_count": 12798, "vote_count": 1}
Hello, I have a set of 5kb sequences upstream of a gene from different primates, and I would like to know what motifs are present in the enhancers and promoters of the primates versus the 5kb upstream sequences of the same gene in a set of rodents. Would an analysis with the MEME suite in "Discriminative mode" be correct, putting the rodent sequences as the control sequences and the primate sequences as the primary sequences, to contrast them (since the normal mode looks for similarities)? If not, how should this analysis be carried out? Thanks!
I think the discriminative mode is more to look for enrichment of the same motif. But different species may have the same motif with different information values. So one thought is, given sets of orthologous genes, you might perhaps perform a MEME scan for conserved motifs over their respective upstream promoter sequences. Pairs of per-species MEME PWMs could be compared with TOMTOM to query for significantly similar motifs. Putting results into a grid could help with comparing all species pairings at a glance. Another thought is to use existing (MEME-formatted) PWMs with FIMO to generate whole-genome scans for published motifs, comparing hits over ortholog promoter regions.
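A command-line sketch of that idea (MEME Suite; input files, motif counts and output directories are placeholders):

    # discover motifs separately in each species' upstream sequences
    meme primate_upstream.fa -dna -oc meme_primate -nmotifs 5
    meme rodent_upstream.fa  -dna -oc meme_rodent  -nmotifs 5

    # compare the two sets of discovered motifs
    tomtom -oc tomtom_primate_vs_rodent meme_primate/meme.txt meme_rodent/meme.txt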
biostars
{"uid": 9488349, "view_count": 730, "vote_count": 3}
Hi, I have a set of samples and their log10 total read counts in a data frame like below:

    > head(a)
        log10_total_counts
    A1            6.468503
    A10           6.565213
    A11           6.752139
    A12           5.078598
    A2            6.277342
    A3            6.473411

![enter image description here][1]

For instance, red dots have fewer than 1000000 counts and black dots have more than 1000000 counts. How could I reproduce this plot, please?

[1]: https://i.ibb.co/P6mmKc5/Picture1.png
another plot in ggplot2 with simulated data:

    df <- data.frame(genes = paste0("gene_", 1:75),
                     reads = round(rnorm(75, 110, 2), 0))

    library(ggplot2)
    ggplot(df, aes(genes, reads, color = ifelse(reads < 110, "Fail", "Pass"))) +
        geom_point() +
        scale_y_continuous(limits = c(0, 200)) +
        scale_color_manual(name = "QC", values = c("red", "darkgreen")) +
        geom_hline(yintercept = 110, color = "red") +
        theme_bw() +
        xlab("Sample") +
        labs(y = expression("log"[10]("Total reads"))) +
        theme(axis.text.x = element_text(angle = 45, hjust = 1),
              legend.position = "bottom")

<a href="https://ibb.co/LZnZC0b"><img src="https://i.ibb.co/vjZjJcg/Rplot01.png" alt="Rplot01" border="0"></a>
biostars
{"uid": 361073, "view_count": 9471, "vote_count": 1}
I have a list of >100 gene names and want to get the `locus_tag`s. Is there a way I can do this using the organism's GenBank file and Biopython? For example, the GenBank file has

```
/gene="murE"
/locus_tag="BSUW23_07815"
```

If my list just has `murE`, I'd like it to print out the corresponding `BSUW23_07815`.
Something like this should work:

```
from Bio import SeqIO

genbank_file = "example.gbk"  # insert your filename here
wanted = ["murE", ...]  # or load all your 100 genes from a file

for record in SeqIO.parse(genbank_file, "genbank"):
    for f in record.features:
        if f.type == "CDS" and "gene" in f.qualifiers:
            gene = f.qualifiers["gene"][0]
            if gene in wanted:
                print(f.qualifiers["gene"][0], f.qualifiers["locus_tag"][0])
```

See also http://www.warwick.ac.uk/go/peter_cock/python/genbank/
biostars
{"uid": 110284, "view_count": 8617, "vote_count": 5}
Hello all, I have about ~150 subdirectories in a directory. Each of those subdirectories contains multiple fasta files. I want to rename all the fasta headers by adding the name of the file. I tried this command:

    for i in ./*mg/*.faa; do gawk -i inplace '/>/{gsub(">","&"FILENAME"_");gsub(/\.faa/,x)}1' $i; done

However, this renames the fasta headers with the subfolder name (*mg) as well as the filename. How do I modify the command to include just the filename?
    find ./*mg -type f -name "*.faa" | while read -r F; do sed "/^>/s|\$| $(basename "$F")|" "${F}" > "${F}.new" ; done

This appends the file's base name to every header line and writes the result next to the original as `*.faa.new`, so you can check the output before replacing the originals.
biostars
{"uid": 9542325, "view_count": 542, "vote_count": 1}
<p>Hello, I have a text file like this:</p> <pre><code>GTCAAAGATCAGATGGTTGTAGATGTGTGGTGTTATTTCTGAAGCCTCTGTTCAAGGAAGGCAGTTTCTTATGAA </code></pre> <p>I want to convert it to a fasta file. Can we just manually add a ">" header in the first line?</p> <p>Thanks</p>
<p>A good explanation of the FASTA format is here:</p> <p><a href='http://en.wikipedia.org/wiki/FASTA_format'>http://en.wikipedia.org/wiki/FASTA_format</a></p> <p>Some applications will use the word immediately after the ">" symbol as an "ID", so make sure it means something to you. Note that you can put multiple sequences in the one file: just start each new one with a ">" line, and use a unique ID for each one.</p>
biostars
{"uid": 14120, "view_count": 117492, "vote_count": 2}
In the past when I've run ABySS, I've given it multiple sequencing libraries from multiple input fastq.gz files (~36 separate files). In this current run, I've concatenated all my sequencing libraries into two huge fastq.gz files (one for each mate of the pair). The main reason was that I performed digital normalization and the output was two huge concatenated files. The issue now is that ABySS is taking a lot longer to assemble these normalized reads (about 1/3 of the original size). From reading the ABySS logs, I noticed that in the past there was parallel reading of multiple input fastq.gz files. Is it possible that, since I now have just two huge files, I am not taking advantage of this parallel reading, and a significant amount of time is being used just to read in files? In my past runs with multiple inputs, ABySS would finish reading in the reads in 6-8 hours (I determine this by checking when the pruning starts to get logged). It has been almost a full day now with the current run and it still hasn't finished reading in the reads. I am running ABySS on an ad hoc AWS cluster (StarCluster) with 20 nodes. In past runs with multiple input files I used ABySS 1.52; in this current run with two huge files I am using ABySS 1.9. Could it be a difference between versions also?
> From reading the abyss logs, I noticed in the past there was parallel reading of multiple input fastq.gz files. Is it possible that since I just have two huge files now, I am not taking advantage of this parallel reading? And a significant amount of time is being used just to read in files? Yes. This issue is on our radar. Feel free to open a feature request issue on GitHub. > In past runs with multiple input files, I've used abyss 1.52. In this current run with two huge files, I am using abyss 1.9. Could it be a difference between versions also? No. Cheers, Shaun
biostars
{"uid": 150328, "view_count": 3370, "vote_count": 1}
<p>Hello all,</p> <p>I've read several papers estimating the quality of ChIP-seq, and most estimate the percentage of peaks that include the consensus sequence at about 50%. Using a lower FDR does not improve this very dramatically (to maybe 70%).</p> <p>So what are the high-confidence peaks that do not contain the target motif? What causes them? Is there a systematic study of any sort?</p> <p>I'd be grateful for any pointers.</p> <p>cheers</p>
<p>Some possible explanations could be:</p> <ol> <li>Secondary binding/protein complexes: another protein is binding the DNA and your ChIPed protein is binding to that protein.</li> <li>Generally open chromatin: an active regulatory region (perhaps "unlocked" by a "pioneering TF") is easier to bind in the absence of a motif.</li> <li>Degenerate motif: there is actually a binding motif but it is relatively dissimilar from the consensus.</li> <li>Artefactual regions, see e.g. the "blacklist" here: <a href='https://sites.google.com/site/anshulkundaje/projects/blacklists'>https://sites.google.com/site/anshulkundaje/projects/blacklists</a></li> </ol> <p>Since you have an FDR, it seems you have a control library, so the following may not apply to your case but is interesting in general: even in negative control libraries, regions like transcription start sites give robust peaks: <a href='http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0005241'>http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0005241</a></p>
biostars
{"uid": 97779, "view_count": 2344, "vote_count": 1}