Hi all,
Recently I have been working with some human gene data, and I want to know whether the genes I'm handling are conserved.
Is there any database that provides conservation scores for human genes?
I need conservation scores for all human genes.
I know the UCSC Genome Table Browser provides sequence conservation scores like phastCons and phyloP, but I can't find per-gene conservation scores.
Can anyone show me how to get them?
The format which I want is
```
Gene ID 1 (Ensembl ID or Refseq ID ) | conservation score
Gene ID 2 (Ensembl ID or Refseq ID ) | conservation score
```
Thanks
|
I think it depends on exactly what information you want. Is it conservation between human and a particular species? The UCSC conservation scores give you a measure of conservation based on an alignment between human and 45 other species (here is the [information along with download links](http://hgdownload.cse.ucsc.edu/goldenPath/hg19/phastCons46way/)), which is good for a general impression of conservation levels, but less good for a specific question like "how different is this gene in the marmoset?" (for which you'd want a specific pairwise alignment). They also have a subset of scores for primates, and for placental mammals. So, one way to do it would be to download the data, then process it to extract the answer you're looking for (e.g. get the average phastCons score using the exons, or just coding sequence, of each human gene, then use that average as your gene conservation score). What you mean by "gene" would also depend on the question you were trying to answer - for example, introns may or may not be of interest to you.
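If you go the download-and-average route, here is a minimal sketch of the per-gene averaging step, assuming you have the phastCons track as a bigWig file and a BED file of exons with a gene ID in the name column (both file names below are hypothetical):
```
# Sketch: average phastCons score per gene over its exons.
# Assumes a bigWig of scores and a BED of exons (chrom, start, end, geneID).
# Requires: pip install pyBigWig
from collections import defaultdict
import pyBigWig

bw = pyBigWig.open("phastCons46way.bw")   # hypothetical bigWig file

sums, lengths = defaultdict(float), defaultdict(int)
with open("exons.bed") as bed:            # hypothetical exon BED file
    for line in bed:
        chrom, start, end, gene = line.split()[:4]
        start, end = int(start), int(end)
        mean = bw.stats(chrom, start, end, type="mean")[0]
        if mean is not None:
            sums[gene] += mean * (end - start)   # length-weighted sum
            lengths[gene] += end - start

for gene, total in sums.items():
    print(f"{gene}\t{total / lengths[gene]:.4f}")
bw.close()
```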
|
biostars
|
{"uid": 67942, "view_count": 7836, "vote_count": 4}
|
Samtools can be used to select reads above a certain mapping quality:
    samtools view -h -b -q 30 aligned.bam -o above.mapQ30.bam
**But how do I select reads below a certain mapping quality - all aligned reads below mapQ 30?**
I know it can be done using awk, but the pipeline gets lengthy and time-consuming: first convert bam to sam, separate the header, use awk to keep mapQ below 30, add the header back, and convert the sam back to bam.
Really, it's taking a lot of time.
Thanks,
|
# Method B - requires samtools version 1.3
**first select the mapped reads only**
`samtools view -b -h -F 4 aligned.bam > mapped_only.bam` # this selects for mapped reads only.
`samtools index mapped_only.bam` # index the bam file
**Now separate the mapped reads above mapQ 40 and below**
`samtools view -b -h -U below_mapQ40.bam -q 40 mapped_only.bam -o above_mapQ40.bam` # make sure to use the right samtools version (1.3)
`samtools index above_mapQ40.bam` # index the bam file with reads of mapQ 40 and above
`samtools index below_mapQ40.bam` # index the bam file with reads of mapQ below 40
|
biostars
|
{"uid": 211541, "view_count": 12616, "vote_count": 3}
|
Hello,
I haven't worked with miRNA mapping before (the species is mouse); it is my first time doing it.
I have read a lot of posts and websites on how to map my microRNA data.
Here is what I follow.
**1) Trim the adapter using cutadapt**
When I looked at my fastq file (single-end), the read length was 36 bp. By eyeballing the reads, I could not find any common adapter sequence. (So I am not sure whether my fastq file has already been trimmed or not, and I kept both versions.) This is my command (it comes from http://www.ark-genomics.org/events-online-training-eu-training-course/adapter-and-quality-trimming-illumina-data):
```
cutadapt -a CGACAGGTTCAGAGTTCTACAGTCCGACGATC \
-a TACAGTCCGACGATC \
-a ATCTCGTATGCCGTCTTCTGCTTG \
-e 0.1 -O 5 -m 15 \
-o xxx_microRNA_adaprm.fastq xxx_microRNA.fastq
```
So, I have 2 version fastq files. (`xxx_microRNA.fastq`, `xxx_microRNA_adaprm.fastq`)
**2) Local bowtie2 alignment of miRNA data**
(I generated the reference with bowtie-build directly from the hairpin FASTA file downloaded from miRBase.)
```
bowtie2-build hairpin_mms.fa hairpin_mms.fa
bowtie2-build mature_mms.fa mature_mms.fa
bowtie2 --local -N 1 -L 16 -x hairpin_mms.fa -U fastq/xxx_microRNA(_adaprm).fastq -S xxxx.sam
```
The bowtie2 mapping returns the following result (for both versions).
I must have done something wrong...
```
2849144 reads; of these:
2849144 (100.00%) were unpaired; of these:
2847350 (99.94%) aligned 0 times
933 (0.03%) aligned exactly 1 time
861 (0.03%) aligned >1 times
0.06% overall alignment rate
```
Could you please help me with this?
Oops, I forgot to convert U to T in the hairpin.fa file (so stupid of me). Once I rerun it, I will confirm the result again.
|
Try clipping the adapter of the Illumina HiSeq 2000 miRNA protocol (TGGAATTCTCGGGTGCCAAGGAACTCCAGTCAC).
And always check your read length distribution after clipping. There should be a peak at around 24 nt.
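A quick way to do that check is to tally read lengths directly from the trimmed FASTQ; a minimal Python sketch (the file name is a placeholder; use the gzip module for gzipped input):
```
# Sketch: tally read lengths in a FASTQ to look for the expected
# small-RNA peak after adapter clipping. File name is a placeholder.
from collections import Counter

lengths = Counter()
with open("xxx_microRNA_adaprm.fastq") as fq:
    for i, line in enumerate(fq):
        if i % 4 == 1:                       # sequence is every 4th line, offset 1
            lengths[len(line.strip())] += 1

for length in sorted(lengths):
    print(length, lengths[length])
```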
|
biostars
|
{"uid": 144345, "view_count": 8279, "vote_count": 8}
|
I often see the word **depth** in the manuals of NGS tools. What does it mean?
Thanks.
|
I believe it is the same concept as [coverage](http://en.wikipedia.org/wiki/Shotgun_sequencing#Coverage); it might come from a shorthand for **depth of coverage**.
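As a rough illustration of the arithmetic (a sketch with made-up numbers, not from this thread): average depth of coverage is simply total sequenced bases divided by genome size.
```
# Sketch: expected average depth = (number of reads * read length) / genome size.
# All values below are invented example numbers.
n_reads = 300_000_000        # reads from a hypothetical sequencing run
read_length = 100            # bp
genome_size = 3_000_000_000  # bp, roughly the human genome

depth = n_reads * read_length / genome_size
print(f"~{depth:.0f}x average depth")   # ~10x
```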
|
biostars
|
{"uid": 638, "view_count": 163360, "vote_count": 46}
|
Hello,
My question is: how many mismatches are allowed (on average, or as a hard limit) in a bowtie2 alignment before it moves on to find a new home for a read? I know that we can control the number of mismatches allowed in the seed with the -N option, and that with the -D option I control the number of seed extension attempts that can "fail".
So I guess my question now is: when does an extension "fail"? The manual says that when an extension does not yield a new best or a new second-best alignment, it fails.
I'm not able to map this definition to a number (of mismatches). I've re-read the method for calculating the alignment score many times now. Would two consecutive SNPs be allowed, or do two SNPs need a certain number of matched bases between them so that the overall alignment doesn't "fail"?
I think, in my naive way, I've explained the core issues of my question. I'll edit if it's not entirely clear.
Thanks,
Ayush
|
`-D` and `-N` aren't directly related to the number of mismatches. An alignment fails if it does not meet the `--score-min` criteria.
For global alignment the costs are `--ma 0` and `--mm 2-6`, depending on base quality. The score threshold is given by `--score-min L,-0.6,-0.6`. For a 100 bp read, `--score-min` computes to about -60. Given only high-quality SNPs (`--mm 6`), an alignment can have up to 10 mismatches and still meet the score threshold. However, with 10 mismatches randomly distributed along the read, you will need sensitive seeding parameters to initialize the alignment in the first place.
To limit bowtie2 to 2 mismatches *per 100 bp* in global mode, you can set `--score-min L,0,-0.13`. To limit reads of *arbitrary length* to 2 mismatches, you can set `--score-min C,-13,0`.
Given default gap costs (5,3), this would also allow 2 non-adjacent 1 bp gaps or one 3 bp gap in your alignment.
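To make the arithmetic concrete, a small sketch of how the linear threshold function translates into an allowed mismatch count (this just reproduces the reasoning above; the penalty of 6 assumes high base qualities):
```
# Sketch: a bowtie2-style minimum score L,a,b computes a + b * read_length.
# With high-quality mismatches costing 6, the allowed mismatch count follows.
def max_mismatches(a, b, read_length, mismatch_penalty=6):
    min_score = a + b * read_length            # the score floor for this read
    return int(-min_score // mismatch_penalty) # mismatches that fit above the floor

print(max_mismatches(-0.6, -0.6, 100))  # default L,-0.6,-0.6 -> 10 mismatches
print(max_mismatches(0, -0.13, 100))    # L,0,-0.13 -> 2 mismatches per 100 bp
```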
|
biostars
|
{"uid": 150246, "view_count": 3410, "vote_count": 1}
|
I have a VCF file containing genotype date for a few thousand SNPs across a few thousand samples. I would like to firstly convert this to a matrix (possibly using the VariantAnnotation package) and then perform a PCA analysis on the samples followed by some sort of clustering algorithm. I have very little experience with any of SNP matrix packages, PCA or clustering algorithms so I was wondering if anyone knew of any good tutorials which may be able to help me.
It is also worth noting that due to the nature of the analysis I am running, the SNP matrix will be extremely sparse. I would therefore also like to get information on the fraction of missing genotypes for each sample and the fraction of missing samples for each SNP - is this possible?
|
To pursue PCA you can use [SNPRelate][1] or [TASSEL][2], which has a user-friendly platform.
[1]: http://corearray.sourceforge.net/tutorials/SNPRelate/
[2]: http://www.maizegenetics.net/tassel
|
biostars
|
{"uid": 320253, "view_count": 1588, "vote_count": 2}
|
I'd be interested in retrieving, via BioMart, the protein length associated with a transcript, as per the screenshot below.
![screenshot][1]
I have retrieved the list of attributes available from BioMart but can't seem to find the right field.
Can someone confirm whether this info is accessible programmatically? How?
Thanks,
[1]: /media/images/f933ce1c-b0bb-453f-9f40-48e0117a
|
Hi iatz,
You can't retrieve protein length directly from BioMart. However, you can retrieve the CDS length and divide by 3 (strictly, subtract 1 after dividing, since the CDS includes the stop codon).
![enter image description here][1]
[1]: /media/images/c887b1d6-5ce8-4e04-8274-37098c0b
|
biostars
|
{"uid": 9523545, "view_count": 642, "vote_count": 1}
|
Hello, everyone
Can someone help me access the Tranche repository (Univ. of Michigan) to retrieve MS data?
The URL (https://www.proteomecommons.org/tranche/) does not work.
When I try to access it, I'm redirected to another page.
Best regards,
|
The Proteome Commons Tranche sadly shut down around 2013 due to lack of funding. It is a good example why important bioinformatics infrastructure needs dedicated funding and should not have to operate out of research grants for individual groups.
|
biostars
|
{"uid": 228388, "view_count": 4705, "vote_count": 2}
|
Hi All,
I have been using DESeq2 to perform differential gene expression analysis between two groups, but I do not have housekeeping genes in the data set. Is it still possible to run DESeq without housekeeping genes? Also, not all genes are expressed across all subjects/samples.
I get the following error message when calculating size factors:
> dds <- DESeq(dds)
>estimating size factors
>Error in estimateSizeFactorsForMatrix(counts(object), locfunc = locfunc, :
>every gene contains at least one zero, cannot compute log geometric means
Any help would be appreciated.
Thank you.
Gajender
|
DESeq2 by default doesn't use housekeeping genes. It assumes that most genes are not differentially expressed, and normalizes using the median of the per-gene ratios to a pseudo-reference (the median-of-ratios method).
But your problem is exactly what the message says: the default normalization method calculates the geometric mean of each gene, and that is not possible when a gene has a sample with zero counts. Some genes with some zeroes are fine - the normalization function just ignores them - but in your data every single gene is being excluded for that reason. The default method of normalization will not work with your dataset as is. If you have a few samples with very few reads, omitting them might fix the problem.
|
biostars
|
{"uid": 286857, "view_count": 3747, "vote_count": 2}
|
Hi All,
I am working with a viral sequence. Upon investigating the freebayes output vcf file, I found two positions where the variants are the reverse complement of the reference.
Reference 527 . CCCGGGCGTCGGGCGAC GTCGCCCGACGCCCGGG 0 . AB=0;ABP=0;AC=0;AF=0;AN=1;AO=1837;CIGAR=2X2M2X2M1X2M2X2M2X;DP=8277;DPB=9767.53;DPRA=0;EPP=3415.19;EPPR=9687.22;GTI=0;LEN=17;MEANALT=29;MQM=40.3887;MQMR=39.4471;NS=1;NUMALT=1;ODDS=65195.1;PAIRED=0.997278;PAIREDR=0.968095;PAO=0;PQA=0;PQR=137846;PRO=4415;QA=0;QR=230712;RO=6394;RPL=63;RPP=3463.56;RPPR=11044.7;RPR=1774;RUN=1;SAF=1831;SAP=3940.06;SAR=6;SRF=6101;SRP=11459.1;SRR=293;TYPE=complex GT:DP:DPR:RO:QR:AO:QA:GL 0:8277:8277,1837:6394:230712:1837:0:0,-30803.9
and
Reference 5586 . GTCGCCCGACGCCCGGG CCCGGGCGTCGGGCGAC 1.19962e-12 . AB=0;ABP=0;AC=0;AF=0;AN=1;AO=1439;CIGAR=2X2M2X2M1X2M2X2M2X;DP=6715;DPB=8094.82;DPRA=0;EPP=2732.86;EPPR=7945.83;GTI=0;LEN=17;MEANALT=27;MQM=39.3489;MQMR=38.6015;NS=1;NUMALT=1;ODDS=54869.5;PAIRED=0.994441;PAIREDR=0.963359;PAO=0;PQA=0;PQR=125375;PRO=3860;QA=0;QR=188745;RO=5240;RPL=1401;RPP=2806.41;RPPR=9203.98;RPR=38;RUN=1;SAF=9;SAP=3050.08;SAR=1430;SRF=271;SRP=9149.39;SRR=4969;TYPE=complex GT:DP:DPR:RO:QR:AO:QA:GL 0:6715:6715,1439:5240:188745:1439:0:0,-25889.3
When I visualized the alignment in IGV, the variant was present at the specified location. This is the visualization in IGV:
![enter image description here][1]
How can I know for sure:
1. Is this variant a sequencing or mapping artifact, or an inversion variant?
2. Is visualization in IGV a correct method for validating a variant?
[1]: https://s6.postimg.cc/6jwcjicg1/position527.png
|
Hello,
> Is this variant a sequencing or mapping artifact or an inversion
> variant?
this is very likely some kind of artifact. Have a look at the 6th column of your vcf: this is a quality value for the variant site. Your value is 0 or something very close to it. Normally this value should be at least around 20, and depending on the read depth it can be much higher.
Also, freebayes gives you the genotype `0`, which means `REF`.
> Is visualization in IGV a correct method of validating a variant?
For a quick check this is absolutely fine. In this case you can also see that the bases which don't match the ref are faded. This normally means they have bad quality values.
For a real validation, each variant that has an impact on your goal has to be confirmed by another method, like Sanger sequencing.
fin swimmer
|
biostars
|
{"uid": 322605, "view_count": 2383, "vote_count": 2}
|
Hello, everyone.
I want to download paired samples from the TCGA database for RNA-Seq experiments. I've been looking for information on how to download these data, and it looks like they come from https://gdac.broadinstitute.org/. Specifically, I'm looking for breast cancer samples, so I looked in the mRNASeq section.
Inside this section there are several files to download (I want the raw counts), but I don't know what the differences are between these two:
illuminahiseq_rnaseqv2-RSEM_genes (MD5)
illuminahiseq_rnaseq-gene_expression (MD5)
I would also like to know how to filter these files to keep the paired samples. I found something about the sample codes in a previous post, but I couldn't access the link to the explanation (https://wiki.nci.nih.gov/display/TCGA/TCGA+barcode).
Could you help me with this problem please?
PS: Suggestions about other databases are welcome!
|
In your file manifest that you used to download the data, you will have a UUID. Use that to look up the TCGA barcode for each file via this function: https://www.biostars.org/p/306400/#306517
You can then infer Tumour-Normal pairings by matching on the TCGA barcode. Yet more information on barcodes:
- https://www.biostars.org/p/313063/#313066
- https://gdc.cancer.gov/resources-tcga-users/tcga-code-tables
- https://github.com/kevinblighe/TCGAbarcode/blob/master/TCGAbarcode.pdf
I have already analysed the TCGA BRCA data many times and there are ~111 Tumour-Normal pairs for the RNA-seq data.
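As a rough sketch of the pairing logic (the barcodes below are made up; the sample-type code is the first two digits of the fourth barcode field, with 01 = primary tumour and 11 = solid tissue normal):
```
# Sketch: pair tumour and normal samples by TCGA participant barcode.
# Example barcodes are invented; real ones come from your manifest lookup.
barcodes = ["TCGA-A7-A0CE-01A-11R-A00Z-07", "TCGA-A7-A0CE-11A-21R-A089-07"]

pairs = {}
for bc in barcodes:
    fields = bc.split("-")
    participant = "-".join(fields[:3])      # e.g. TCGA-A7-A0CE
    sample_type = fields[3][:2]             # "01" tumour, "11" normal
    pairs.setdefault(participant, {})[sample_type] = bc

for participant, samples in pairs.items():
    if "01" in samples and "11" in samples:  # participant has both -> a pair
        print(participant, samples["01"], samples["11"])
```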
Kevin
|
biostars
|
{"uid": 331019, "view_count": 2472, "vote_count": 1}
|
I tried to use BWA MEM to map reads from an interleaved FASTQ.
    fastq="all.fastq"
    fasta="/share/PI/apps/bcbio/genomes/Hsapiens/GRCh37/seq/GRCh37.fa"
    bwa="/share/PI/apps/bcbio/anaconda/bin/bwa"
    nThreads="12"
    #Run BWA MEM
    #IMPORTANT: NEED -p since "$fastq" is an interleaved fastq
    readGroup="@RG\tID:CHM1\tSM:CHM1\tPL:Illumina"
    sam="CHM1.sam"
    "$bwa" mem -R "$readGroup" -t "$nThreads" -p "$fasta" "$fastq" -o "$sam"
(The FASTQs are CHM1; I used `prefetch` to fetch `.sra` files from three different runs from NCBI, then used `fastq-dump` to convert the SRAs to FASTQs, then `cat`ed them all together into one FASTQ.)
The SAM is 515Gb but has no obvious problems. `samtools quickcheck` says it's valid. But when I run GATK4's `FixMateInformation` or `ValidateSamFile`, I get output like this
ERROR: Record 1, Read name ######################################################################################################################################################################################################, Zero-length read without FZ, CS or CQ tag
WARNING: Record 1, Read name ######################################################################################################################################################################################################, QUAL field is set to * (unspecified quality scores), this is allowed by the SAM specification but many tools expect reads to include qualities
ERROR: Record 421522661, Read name ######################################################################################################################################################################################################, Zero-length read without FZ, CS or CQ tag
There may be even more errors, but this is what I got after two hours.
I can, in fact, see that the first line in the SAM file is
###################################################################################################################################################################################################### 4 * 0 0 * * 0 0 * * AS:i:0 XS:i:0 RG:Z:CHM1
Is this SAM really invalid? Or is there something I need to do so GATK4 will accept it?
|
`fastq-dump` will create file names and output reads to those files itself, so don't use redirection to stdout (`>`) - if you want stdout, then you have to use `-Z`:
-Z|--stdout Output to stdout, all split data become
joined into single stream
There is a working example of fastq-dump for the very same dataset you are using here:
https://www.biostars.org/p/120933/
fastq-dump --origfmt -I --split-files --gzip SRR1514950.sra
This will output two files one for R1 and other for R2, so you will need to modify your bwa mem command.
|
biostars
|
{"uid": 329032, "view_count": 1782, "vote_count": 1}
|
Hi all. I'm working on a Python script that finds the ORF for a sequence, within a function. I hope this is explained well enough to follow:
It works so far if both a start and a stop codon are found, or just a stop. However, if there is a start but no stop codon, I run into a problem with one of my sequences: the return from the function is "None".
Here's what I have thus far (might be indent issues from pasting):
    def find_orf(sequence, gb):
        start_pos = sequence.find('GCCGCCACCATG')
        print "START " + str(start_pos)
        if start_pos >= 0:
            s_to_ATG = int(start_pos) + 9
            start = sequence[s_to_ATG:]
            for i in xrange(0, len(start), 3):
                stops = ["TAA", "TGA", "TAG"]
                codon = start[i:i+3]
                ## start code block with issue
                if codon in stops:
                    orf = start[:i+3]
                else:
                    orf = start
                ## end code block with issue
            return orf, start_pos
        elif start_pos < 0:
            stop_pos = sequence.find(str(gb[-12:]))
            begin_to_stop = int(stop_pos) + 12
            return sequence[:begin_to_stop], start_pos
        else:
            print "Error: There is no open-reading frame for this sequence!"
I've highlighted what is giving me issues in the code. It seems to always go to the "else" statement. The first sequence I have, I know for sure it has a START and STOP. The second sequence does not have a STOP, but has a START. So, I want it to print START to the end of the sequence, which in the code is the variable "start".
If I change this IF statement in the code to this:
    for i in xrange(0, len(start), 3):
        stops = ["TAA", "TGA", "TAG"]
        codon = start[i:i+3]
        if codon in stops: return start[:i+3], start_pos
it prints the first sequence from start to stop correctly, and the second is returned as "NoneType" because it has no STOP. It may be something simple that I'm overlooking, but I was hoping someone could spot what's wrong with the first code example, so that if there is a start and a stop it prints correctly, and if there is no stop it prints from the start to the end. (The second part of the code works: if there is no start, it prints from the beginning to the stop.)
All help is appreciated.
|
You just need to add a break:
    def find_orf(sequence, gb):
        start_pos = sequence.find('GCCGCCACCATG')
        print "START " + str(start_pos)
        if start_pos >= 0:
            s_to_ATG = int(start_pos) + 9
            start = sequence[s_to_ATG:]
            for i in xrange(0, len(start), 3):
                stops = ["TAA", "TGA", "TAG"]
                codon = start[i:i+3]
                if codon in stops:
                    orf = start[:i+3]
                    break  ## Added a break statement here
                else:
                    orf = start
            return orf, start_pos
        elif start_pos < 0:
            stop_pos = sequence.find(str(gb[-12:]))
            begin_to_stop = int(stop_pos) + 12
            return sequence[:begin_to_stop], start_pos
        else:
            print "Error: There is no open-reading frame for this sequence!"
Without the break, you'll always iterate to the end of the sequence.
|
biostars
|
{"uid": 116405, "view_count": 12537, "vote_count": 1}
|
I'm working on microarray data from GEO. Many data sets are in .txt format. How can I use them for analysis? Is there any way to convert .txt format to .CEL format?
|
To answer your question, no, there is not a way to convert .txt to .cel.
|
biostars
|
{"uid": 154211, "view_count": 4581, "vote_count": 1}
|
Hi
I'm having a problem with a whole-exome-sequenced dataset consisting of about 400 human subjects: 200 cases with a certain disease and 200 controls without. The dataset has been through rigorous quality control (standardised QC in plink with HWE, IBD, missingness, sex-check etc., along with HapMap population stratification and Eigenstrat/PCA analysis). I'm using plink to do a basic association analysis of all variants between cases and controls, and while the resulting QQ-plot for the common (MAF > 0.01) variants is OK, the plot for the rare (MAF < 0.01) variants is less so. Below are the three QQ-plots for all, common and rare variants, along with lambda values:
- QQ-plot all variants, lambda 1.83
http://postimg.org/image/oyr53wchr/
- QQ-plot common variants, lambda 1.04
http://postimg.org/image/6lqjtc20v/
- QQ-plot rare variants, lambda 2.43
http://postimg.org/image/ic4haputb/
The main problem seems to be the positive deviation (observed > expected) of the rare variants in the first part of the plot, which makes the lambda very large for both the rare-variant and all-variant QQ-plots. I am wondering what could cause this behaviour of the rare variants, and also what implications it has for the prospects of analysing rare and common variants together.
I would be grateful if anybody has any experience in these matters and could provide some input.
Many thanks.
|
I would generate p-values using an exact test, and see if there is still inflation, personally.
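In case it helps, a minimal sketch of such an exact test on one variant's allele counts (the counts are invented for illustration; scipy is assumed to be available):
```
# Sketch: Fisher's exact test on a 2x2 table of minor/major allele counts
# in cases vs controls. Counts below are made up.
from scipy.stats import fisher_exact

#         minor  major
table = [[5,     395],   # cases    (200 subjects -> 400 alleles)
         [1,     399]]   # controls
odds_ratio, p_value = fisher_exact(table)
print(odds_ratio, p_value)
```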
|
biostars
|
{"uid": 194229, "view_count": 3504, "vote_count": 7}
|
Hello,
I want to filter my non-variant positions by mapping quality. I used mpileup to output the mapping qualities across all sites. To do the filtering, can I take the arithmetic mean of the mapping qualities of the reads that each base belongs to?
For example:
NW_008793873.1 13 G 2 .^S. AB HS
Position 13 is covered by two reads with mapping qualities **H** and **S**. Would **(39 + 50)/2 = 44.5** be correct?
I ask because, in connection with mapping qualities, root mean square is usually mentioned, so I was wondering what the correct approach would be in this case.
Thank you!
|
These are probabilities of mismapping on a PHRED scale. For the first one, the probability of mismapping is:
    10^(-(39/10)) = 0.0001258925
For the second it is:
    10^(-(50/10)) = 1e-05
So on average, your probability of mismapping is:
    (0.0001258925 + 1e-05)/2 = 6.794627e-05
On a PHRED scale this is:
    -10*log10(6.794627e-05) = 41.67835
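The same calculation as a small Python sketch, in case you want to apply it across many sites:
```
# Sketch: average PHRED-scaled mapping qualities on the probability scale,
# then convert the mean back to PHRED.
import math

def mean_phred(quals):
    probs = [10 ** (-q / 10) for q in quals]          # PHRED -> probability
    return -10 * math.log10(sum(probs) / len(probs))  # mean prob -> PHRED

print(mean_phred([39, 50]))   # ~41.68, matching the worked example above
```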
|
biostars
|
{"uid": 332075, "view_count": 2393, "vote_count": 1}
|
Hello guys. I ran into some problems when following the Nanopolish tutorial at https://nanopolish.readthedocs.io/en/latest/quickstart_eventalign.html
One of the steps is to use samtools to sort alignments. I have a reads.bam file and am trying to sort it into reads.sorted.bam. My environment is Ubuntu and the samtools version is 1.9 (the most recent one).
I run the following command
samtools sort -o reads.sorted.bam reads.bam
Error: samtools sort: couldn't allocate memory for bam_mem
Following the suggestion from https://github.com/samtools/samtools/issues/831
Then I try to run
samtools sort -m GALAXY_MEMORY_MB / GALAXY_SLOTS -o reads.sorted.bam reads.bam
Output information:
Usage: samtools sort [options...] [in.bam]
Options:
-l INT Set compression level, from 0 (uncompressed) to 9 (best)
-m INT Set maximum memory per thread; suffix K/M/G recognized [768M]
-n Sort by read name
-t TAG Sort by value of TAG. Uses position as secondary index (or read name if -n is set)
-o FILE Write final output to FILE rather than standard output
-T PREFIX Write temporary files to PREFIX.nnnn.bam
--input-fmt-option OPT[=VAL]
Specify a single input file format option in the form
of OPTION or OPTION=VALUE
-O, --output-fmt FORMAT[,OPT[=VAL]]...
Specify output format (SAM, BAM, CRAM)
--output-fmt-option OPT[=VAL]
Specify a single output file format option in the form
of OPTION or OPTION=VALUE
--reference FILE
Reference sequence FASTA FILE [null]
-@, --threads INT
Number of additional threads to use [0]
For me, it now seems that no error occurs... However, I could not find reads.sorted.bam in the current directory, so when I run the next step,
samtools index reads.sorted.bam
the expected error occurs: samtools index: failed to open "reads.sorted.bam"; No such file or directory.
I really don't understand what happened. Can anyone help me figure out why the sorted bam file is not created? Thank you.
|
> Error: samtools sort: couldn't allocate memory for bam_mem
Use the `-m` option (`-m INT Set maximum memory per thread; suffix K/M/G recognized [768M]`) to reduce the memory, for example `-m 10M`. But it looks like your computer doesn't have enough memory...
(Note also that `GALAXY_MEMORY_MB` and `GALAXY_SLOTS` in the linked GitHub issue are placeholders to be replaced with real values; passed literally, samtools just prints its usage text and never writes an output file, which is why reads.sorted.bam was missing.)
|
biostars
|
{"uid": 390096, "view_count": 4222, "vote_count": 1}
|
hi,
I have a long list of different miRNAs like below
hsa-miR-7641
bta-miR-2904
hhi-miR-7641
hsa-miR-4454
bta-miR-2478
efu-let-7c
mmu-miR-6240
hsa-miR-7704
is there any way to avoid searching them one by one in miRBase and copy-pasting their mature sequences?
|
Hi,
You can use [faSomeRecords][1] to extract multiple fasta records.
Usage:
    faSomeRecords mature.fasta your-list-file result.fa
If you use `grep -f/-F` instead, be careful of spaces and blank lines; you can check for them with [Notepad++][2] on Windows.
Take care.
[1]: http://hgdownload.cse.ucsc.edu/admin/exe/linux.x86_64/faSomeRecords
[2]: https://notepad-plus-plus.org/download/v7.html
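If you'd rather not install the UCSC binary, the same extraction is easy in plain Python (a sketch; the file names are placeholders matching the usage above):
```
# Sketch: keep only FASTA records whose ID is in a list file.
# File names are placeholders.
with open("your-list-file") as f:
    wanted = set(line.strip() for line in f if line.strip())

keep = False
with open("mature.fasta") as fasta, open("result.fa", "w") as out:
    for line in fasta:
        if line.startswith(">"):
            record_id = line[1:].split()[0]   # ID = first word of the header
            keep = record_id in wanted
        if keep:
            out.write(line)
```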
|
biostars
|
{"uid": 215813, "view_count": 3111, "vote_count": 3}
|
Hi all,
Sorry if you find this question basic. Could you please let me know why A/T and C/G SNPs are called ambiguous? If such a SNP, say A/T, is located on one strand, why can't we just say T/A is on the opposite strand?
Thank you
|
So if you know that a SNP is A/T on the forward strand it is T/A on the reverse strand. That's fine. You can check the reference genome and it's clear which is the reference allele and which is the alternative.
The ambiguity is when someone says something like "A is the minor allele" or "A is associated with this phenotype" or "A is the ancestral allele" and they are not clear about which strand they are talking about. Then you don't know if they mean forward A and T is the minor/phenotype-associated/ancestral, or if forward T reverse A is the minor/phenotype-associated/ancestral.
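This is why strand-ambiguous SNPs are often flagged or dropped in GWAS QC; a tiny sketch of the check:
```
# Sketch: an A/T or C/G SNP has alleles that are each other's complement,
# so the allele pair looks identical on both strands.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def is_strand_ambiguous(a1, a2):
    return COMPLEMENT[a1] == a2

print(is_strand_ambiguous("A", "T"))  # True  - ambiguous
print(is_strand_ambiguous("A", "G"))  # False - A/G reads as T/C on the other strand
```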
|
biostars
|
{"uid": 375143, "view_count": 5594, "vote_count": 4}
|
Hi all,
I am using Ensembl VEP (command line) to annotate a VCF I have. I am specifically looking for gnomAD allele frequencies, which is fairly straight forward to do, technically speaking. However, the data looks off in some cases.
For example, when I pass in:
10 69408929 COSM3751912 A T . . GENE=TACR2;STRAND=-;CDS=c.734T>A;AA=p.M245K
I get the VEP output:
COSM3751912 10:69408929 T ENSG00000075073 ENST00000373306 Transcript missense_variant 1278 734 245 M/K aTg/aAg rs55953810 MODERATE - -1 - TACR2 HGNC HGNC:11527 YES ENSP00000362403 P21452 - UPI0000061EE3 - 3/5 - Gene3D:1.20.1070.10,Pfam_domain:PF00001,PROSITE_profiles:PS50262,hmmpanther:PTHR43919,hmmpanther:PTHR43919:SF4,SMART_domains:SM01381,Superfamily_domains:SSF81321,Conserved_Domains:cd16004 1 1 1 1 1 0.9999 0.9999 0.9999 1 0.9999 1 0.9999 1 1 1 gnomAD_ASJ,gnomAD_FIN,gnomAD_OTH,gnomAD_SAS,AFR,AMR,EAS,EUR,SAS - - - - - - -
Jumbling through that, you can see the allele frequency for `gnomAD_AF` is **0.9999**. This seems odd to me. How could this variant be a COSMIC (cancer database) missense variant, with a `MODERATE` consequence, and have a **99.99%** frequency? I'm lost on how to interpret this.
Maybe I am misunderstanding how gnomAD scores allele frequencies, hence posting this question here.
**Does anyone know how gnomAD allele frequencies (as outputted by Ensembl's VEP) should be interpreted?**
|
The reference allele, in this case, appears to be a very rare allele. The reference allele is whatever was found in the reference genome, which is a genomic region of a real person, which means that it can be a rare or private allele. In this case, the reference, A, is very rare.
The VEP is giving you the allele frequency of the alternative allele for the variant at this locus, which is [rs55953810](http://www.ensembl.org/Homo_sapiens/Variation/Population?db=core;r=10:69408429-69409429;tl=2uCd0VOAH86sXHFW-4871232;v=rs55953810;vdb=variation;vf=73998235). The alternative allele given in your VEP input is T, so the allele frequency it gives you is for T.
VEP reads VCF format as standard, assuming that the alleles are the forward strand alleles. Your specification of strand in the INFO column is ignored because this is not the standard way to write VCF. This is why your alleles have not been converted.
It appears that you have run your VEP input against GRCh38. gnomAD coordinates are on GRCh37, and are remapped onto GRCh38, so can be looked up by the VEP to find the allele frequencies. This is why there is no variant at that locus in gnomAD, but [that variant identifier and its frequencies do exist](http://gnomad.broadinstitute.org/variant/10-71168685-A-T).
|
biostars
|
{"uid": 355992, "view_count": 4369, "vote_count": 3}
|
Hello all :)
I often find myself with shell scripts - or rather, lists of commands to run via bash - which *could* be run concurrently if only there were an easy way to do that.
For example, here is a snippet of one such file:
```
java -jar /ex/picard/picard.jar FastqToSam \
    DESCRIPTION="Circadian Day" \
    FASTQ="/sequence/44_Mm01_WEAd_C2_H3K27ac_F_1_CAGATC_L003_R1_001.fastq.gz" \
    FASTQ2="/sequence/44_Mm01_WEAd_C2_H3K27ac_F_1_CAGATC_L003_R2_001.fastq.gz" \
    OUTPUT="/sequence/44_Mm01_WEAd_C2_H3K27ac_F_1_CAGATC_L003.bam" \
    SAMPLE_NAME="C57BL/6J" \
    LIBRARY_NAME="Mm01.H3K27ac" \
    READ_GROUP_NAME="39V34V1.D265MACXX.3.CAGATC" \
    SEQUENCING_CENTER="Freiburg" \
    PLATFORM="illumina"

java -jar /ex/picard/picard.jar FastqToSam \
    DESCRIPTION="Circadian Day" \
    FASTQ="/sequence/44_Mm01_WEAd_C2_H3K27ac_F_1_CAGATC_L004_R1_001.fastq.gz" \
    FASTQ2="/sequence/44_Mm01_WEAd_C2_H3K27ac_F_1_CAGATC_L004_R2_001.fastq.gz" \
    OUTPUT="/sequence/44_Mm01_WEAd_C2_H3K27ac_F_1_CAGATC_L004.bam" \
    SAMPLE_NAME="C57BL/6J" \
    LIBRARY_NAME="Mm01.H3K27ac" \
    READ_GROUP_NAME="39V34V1.D265MACXX.4.CAGATC" \
    SEQUENCING_CENTER="Freiburg" \
    PLATFORM="illumina"

java -jar /ex/picard/picard.jar FastqToSam \
    DESCRIPTION="Circadian Day" \
    FASTQ="/sequence/44_Mm01_WEAd_C2_H3K27ac_F_1_CAGATC_L003_R1_001.fastq.gz" \
    FASTQ2="/sequence/44_Mm01_WEAd_C2_H3K27ac_F_1_CAGATC_L003_R2_001.fastq.gz" \
    OUTPUT="/sequence/44_Mm01_WEAd_C2_H3K27ac_F_1_CAGATC_L003.bam" \
    SAMPLE_NAME="C57BL/6J" \
    LIBRARY_NAME="Mm01.H3K27ac" \
    READ_GROUP_NAME="39V34V1.D2GD2ACXX.3.CAGATC" \
    SEQUENCING_CENTER="Freiburg" \
    PLATFORM="illumina"

java -jar /ex/picard/picard.jar FastqToSam \
    DESCRIPTION="Circadian Day" \
    FASTQ="/sequence/44_Mm01_WEAd_C2_H3K27me3_F_1_CGATGT_L003_R1_001.fastq.gz" \
    FASTQ2="/sequence/44_Mm01_WEAd_C2_H3K27me3_F_1_CGATGT_L003_R2_001.fastq.gz" \
    OUTPUT="/sequence/44_Mm01_WEAd_C2_H3K27me3_F_1_CGATGT_L003.bam" \
    SAMPLE_NAME="C57BL/6J" \
    LIBRARY_NAME="Mm01.H3K27me3" \
    READ_GROUP_NAME="39V34V1.D265MACXX.3.CGATGT" \
    SEQUENCING_CENTER="Freiburg" \
    PLATFORM="illumina"

..
```
Let's say there were 1000 such commands in a file called **jobs.sh**; I could run it with `bash jobs.sh` to have each one done one after the other. However, since each job clearly does not rely on the others being completed, it would be nice to run them in parallel, say 5 at a time.
Bash natively has no way to do that without some really complicated logic wrapping each command, deciding when and when not to background a command and proceed to the next.
The Unix tool **parallel** can do it with something like:
parallel --jobs 5 sh -c {} < jobs.sh
However this will only work if the commands in jobs.sh are one-per-line. Multiline commands like the ones above break it. Furthermore, if the commands behave differently when their stdout/stderr are being redirected, they will detect that parallel is redirecting and, well, act differently - no status output, refuse to run, etc.
You can also do this with native Unix **xargs** (https://www.gnu.org/software/parallel/man.html#DIFFERENCES-BETWEEN-xargs-AND-GNU-Parallel), but it's fiddlier and has all the same issues as parallel.
So my question is two-fold.
1. Does anyone know of a tool to parallelize a list of commands that would otherwise run consecutively in bash, with little extra effort? Ideally each command would run in its own screen session so we can drop in at any time to see how it's getting along (with no stdout/err redirection issues), and we could also spin worker processes up and down as we go, etc.
2. How do you manage jobs currently? Perhaps you never have to run 100 nearly identical commands. Perhaps your commands are generated by something else (like Galaxy) which also manages jobs for you. Perhaps you, like me last week, open up 32 new terminal windows and run 1/32nd of the commands of jobs.sh in each :P Or perhaps something much more sophisticated... hopefully :)
|
You're looking for **GNU make** and the `-j` option. Some clusters use a custom version of make (e.g. 'qmake' for SGE).
Search biostars for some other solutions (snakemake, ...), e.g.: https://www.biostars.org/p/115745/
An example Makefile:
```
samtools.dir=../samtools/
samtools.exe=${samtools.dir}samtools
wgsim.exe=${samtools.dir}misc/wgsim
bwa.exe=../bwa/bwa
picard.jar=../picard-tools-1.138/picard.jar
bcftools.exe=../bcftools/bcftools
REF=ref.fa
mutations.vcf.gz : ${REF} \
  $(addsuffix .bam.bai ,S1) \
  $(addsuffix .bam.bai ,S2) \
  $(addsuffix .bam.bai ,S3) \
  $(addsuffix .bam.bai ,S4)
	${samtools.exe} mpileup -u -f $< $(filter %.bam,$(basename $^)) |\
	${bcftools.exe} call -c -v -O z -o $@ -

$(addsuffix .bam.bai ,S1): $(addsuffix .bam,S1)
	${samtools.exe} index $<

$(addsuffix .bam,S1): \
  $(addsuffix .bam,S1_01_R1.fq.gz) \
  $(addsuffix .bam,S1_02_R1.fq.gz) \
  $(addsuffix .bam,S1_03_R1.fq.gz)
	java -jar ${picard.jar} MergeSamFiles AS=true O=$@ $(foreach B,$^, I=${B} )

$(addsuffix .bam,S1_01_R1.fq.gz): $(addsuffix .bwt,${REF}) S1_01_R1.fq.gz S1_01_R2.fq.gz
	${bwa.exe} mem -R '@RG\tID:S1\tSM:S1' ${REF} $(filter %.gz,$^) |\
	${samtools.exe} view -Sbu - |\
	${samtools.exe} sort -O sam -o $@ -T $(basename $@) -

S1_01_R2.fq.gz : S1_01_R1.fq.gz
	gzip --best -f $(basename $@)

S1_01_R1.fq.gz : ${REF}
	@sleep 1
	${wgsim.exe} -r 0.01 -S 1 -N 1000 $< $(basename S1_01_R1.fq.gz S1_01_R2.fq.gz)
	gzip --best -f $(basename $@)

$(addsuffix .bam,S1_02_R1.fq.gz): $(addsuffix .bwt,${REF}) S1_02_R1.fq.gz S1_02_R2.fq.gz
	${bwa.exe} mem -R '@RG\tID:S1\tSM:S1' ${REF} $(filter %.gz,$^) |\
	${samtools.exe} view -Sbu - |\
	${samtools.exe} sort -O sam -o $@ -T $(basename $@) -

S1_02_R2.fq.gz : S1_02_R1.fq.gz
	gzip --best -f $(basename $@)

S1_02_R1.fq.gz : ${REF}
	@sleep 1
	${wgsim.exe} -r 0.01 -S 1 -N 1000 $< $(basename S1_02_R1.fq.gz S1_02_R2.fq.gz)
	gzip --best -f $(basename $@)

$(addsuffix .bam,S1_03_R1.fq.gz): $(addsuffix .bwt,${REF}) S1_03_R1.fq.gz S1_03_R2.fq.gz
	${bwa.exe} mem -R '@RG\tID:S1\tSM:S1' ${REF} $(filter %.gz,$^) |\
	${samtools.exe} view -Sbu - |\
	${samtools.exe} sort -O sam -o $@ -T $(basename $@) -

S1_03_R2.fq.gz : S1_03_R1.fq.gz
	gzip --best -f $(basename $@)

S1_03_R1.fq.gz : ${REF}
	@sleep 1
	${wgsim.exe} -r 0.01 -S 1 -N 1000 $< $(basename S1_03_R1.fq.gz S1_03_R2.fq.gz)
	gzip --best -f $(basename $@)

$(addsuffix .bam.bai ,S2): $(addsuffix .bam,S2)
	${samtools.exe} index $<

$(addsuffix .bam,S2): \
  $(addsuffix .bam,S2_01_R1.fq.gz) \
  $(addsuffix .bam,S2_02_R1.fq.gz) \
  $(addsuffix .bam,S2_03_R1.fq.gz)
	java -jar ${picard.jar} MergeSamFiles AS=true O=$@ $(foreach B,$^, I=${B} )

$(addsuffix .bam,S2_01_R1.fq.gz): $(addsuffix .bwt,${REF}) S2_01_R1.fq.gz S2_01_R2.fq.gz
	${bwa.exe} mem -R '@RG\tID:S2\tSM:S2' ${REF} $(filter %.gz,$^) |\
	${samtools.exe} view -Sbu - |\
	${samtools.exe} sort -O sam -o $@ -T $(basename $@) -

S2_01_R2.fq.gz : S2_01_R1.fq.gz
	gzip --best -f $(basename $@)

S2_01_R1.fq.gz : ${REF}
	@sleep 1
	${wgsim.exe} -r 0.01 -S 2 -N 1000 $< $(basename S2_01_R1.fq.gz S2_01_R2.fq.gz)
	gzip --best -f $(basename $@)

$(addsuffix .bam,S2_02_R1.fq.gz): $(addsuffix .bwt,${REF}) S2_02_R1.fq.gz S2_02_R2.fq.gz
	${bwa.exe} mem -R '@RG\tID:S2\tSM:S2' ${REF} $(filter %.gz,$^) |\
	${samtools.exe} view -Sbu - |\
	${samtools.exe} sort -O sam -o $@ -T $(basename $@) -

S2_02_R2.fq.gz : S2_02_R1.fq.gz
	gzip --best -f $(basename $@)

S2_02_R1.fq.gz : ${REF}
	@sleep 1
	${wgsim.exe} -r 0.01 -S 2 -N 1000 $< $(basename S2_02_R1.fq.gz S2_02_R2.fq.gz)
	gzip --best -f $(basename $@)

$(addsuffix .bam,S2_03_R1.fq.gz): $(addsuffix .bwt,${REF}) S2_03_R1.fq.gz S2_03_R2.fq.gz
	${bwa.exe} mem -R '@RG\tID:S2\tSM:S2' ${REF} $(filter %.gz,$^) |\
	${samtools.exe} view -Sbu - |\
	${samtools.exe} sort -O sam -o $@ -T $(basename $@) -

S2_03_R2.fq.gz : S2_03_R1.fq.gz
	gzip --best -f $(basename $@)

S2_03_R1.fq.gz : ${REF}
	@sleep 1
	${wgsim.exe} -r 0.01 -S 2 -N 1000 $< $(basename S2_03_R1.fq.gz S2_03_R2.fq.gz)
	gzip --best -f $(basename $@)

$(addsuffix .bam.bai ,S3): $(addsuffix .bam,S3)
	${samtools.exe} index $<

$(addsuffix .bam,S3): \
  $(addsuffix .bam,S3_01_R1.fq.gz) \
  $(addsuffix .bam,S3_02_R1.fq.gz) \
  $(addsuffix .bam,S3_03_R1.fq.gz) \
  $(addsuffix .bam,S3_04_R1.fq.gz)
	java -jar ${picard.jar} MergeSamFiles AS=true O=$@ $(foreach B,$^, I=${B} )

$(addsuffix .bam,S3_01_R1.fq.gz): $(addsuffix .bwt,${REF}) S3_01_R1.fq.gz S3_01_R2.fq.gz
	${bwa.exe} mem -R '@RG\tID:S3\tSM:S3' ${REF} $(filter %.gz,$^) |\
	${samtools.exe} view -Sbu - |\
	${samtools.exe} sort -O sam -o $@ -T $(basename $@) -

S3_01_R2.fq.gz : S3_01_R1.fq.gz
	gzip --best -f $(basename $@)

S3_01_R1.fq.gz : ${REF}
	@sleep 1
	${wgsim.exe} -r 0.01 -S 3 -N 1000 $< $(basename S3_01_R1.fq.gz S3_01_R2.fq.gz)
	gzip --best -f $(basename $@)

$(addsuffix .bam,S3_02_R1.fq.gz): $(addsuffix .bwt,${REF}) S3_02_R1.fq.gz S3_02_R2.fq.gz
	${bwa.exe} mem -R '@RG\tID:S3\tSM:S3' ${REF} $(filter %.gz,$^) |\
	${samtools.exe} view -Sbu - |\
	${samtools.exe} sort -O sam -o $@ -T $(basename $@) -

S3_02_R2.fq.gz : S3_02_R1.fq.gz
	gzip --best -f $(basename $@)

S3_02_R1.fq.gz : ${REF}
	@sleep 1
	${wgsim.exe} -r 0.01 -S 3 -N 1000 $< $(basename S3_02_R1.fq.gz S3_02_R2.fq.gz)
	gzip --best -f $(basename $@)

$(addsuffix .bam,S3_03_R1.fq.gz): $(addsuffix .bwt,${REF}) S3_03_R1.fq.gz S3_03_R2.fq.gz
	${bwa.exe} mem -R '@RG\tID:S3\tSM:S3' ${REF} $(filter %.gz,$^) |\
	${samtools.exe} view -Sbu - |\
	${samtools.exe} sort -O sam -o $@ -T $(basename $@) -

S3_03_R2.fq.gz : S3_03_R1.fq.gz
	gzip --best -f $(basename $@)

S3_03_R1.fq.gz : ${REF}
	@sleep 1
	${wgsim.exe} -r 0.01 -S 3 -N 1000 $< $(basename S3_03_R1.fq.gz S3_03_R2.fq.gz)
	gzip --best -f $(basename $@)

$(addsuffix .bam,S3_04_R1.fq.gz): $(addsuffix .bwt,${REF}) S3_04_R1.fq.gz S3_04_R2.fq.gz
	${bwa.exe} mem -R '@RG\tID:S3\tSM:S3' ${REF} $(filter %.gz,$^) |\
	${samtools.exe} view -Sbu - |\
	${samtools.exe} sort -O sam -o $@ -T $(basename $@) -

S3_04_R2.fq.gz : S3_04_R1.fq.gz
	gzip --best -f $(basename $@)

S3_04_R1.fq.gz : ${REF}
	@sleep 1
	${wgsim.exe} -r 0.01 -S 3 -N 1000 $< $(basename S3_04_R1.fq.gz S3_04_R2.fq.gz)
	gzip --best -f $(basename $@)

$(addsuffix .bam.bai ,S4): $(addsuffix .bam,S4)
	${samtools.exe} index $<

$(addsuffix .bam,S4): \
  $(addsuffix .bam,S4_01_R1.fq.gz)
	java -jar ${picard.jar} MergeSamFiles AS=true O=$@ $(foreach B,$^, I=${B} )

$(addsuffix .bam,S4_01_R1.fq.gz): $(addsuffix .bwt,${REF}) S4_01_R1.fq.gz S4_01_R2.fq.gz
	${bwa.exe} mem -R '@RG\tID:S4\tSM:S4' ${REF} $(filter %.gz,$^) |\
	${samtools.exe} view -Sbu - |\
	${samtools.exe} sort -O sam -o $@ -T $(basename $@) -

S4_01_R2.fq.gz : S4_01_R1.fq.gz
	gzip --best -f $(basename $@)

S4_01_R1.fq.gz : ${REF}
	@sleep 1
	${wgsim.exe} -r 0.01 -S 4 -N 1000 $< $(basename S4_01_R1.fq.gz S4_01_R2.fq.gz)
	gzip --best -f $(basename $@)

$(addsuffix .bwt,${REF}) : ${REF}
	${bwa.exe} index $<

${REF}:
	echo ">rotavirus" > $@
	echo "GGCTTTTAATGCTTTTCAGTGGTTGCTGCTCAAGATGGAGTCTACTCAGCAGATGGTAAGCTCTATTATT" >> $@
	echo "AATACTTCTTTTGAAGCTGCAGTTGTTGCTGCTACTTCAACATTAGAATTAATGGGTATTCAATATGATT" >> $@
	echo "ACAATGAAGTATTTACCAGAGTTAAAAGTAAATTTGATTATGTGATGGATGACTCTGGTGTTAAAAACAA" >> $@
	echo "TCTTTTGGGTAAAGCTATAACTATTGATCAGGCGTTAAATGGAAAGTTTAGCTCAGCTATTAGAAATAGA" >> $@
	echo "AATTGGATGACTGATTCTAAAACGGTTGCTAAATTAGATGAAGACGTGAATAAACTTAGAATGACTTTAT" >> $@
	echo "CTTCTAAAGGGATCGACCAAAAGATGAGAGTACTTAATGCTTGTTTTAGTGTAAAAAGAATACCAGGAAA" >> $@
	echo "ATCATCATCAATAATTAAATGCACTAGACTTATGAAGGATAAAATAGAACGTGGAGAAGTTGAGGTTGAT" >> $@
	echo "GATTCATATGTTGATGAGAAAATGGAAATTGATACTATTGATTGGAAATCTCGTTATGATCAGTTAGAAA" >> $@
	echo "AAAGATTTGAATCACTAAAACAGAGGGTTAATGAGAAATACAATACTTGGGTACAAAAAGCGAAGAAAGT" >> $@
	echo "AAATGAAAATATGTACTCTCTTCAGAATGTTATCTCACAACAGCAAAACCAAATAGCAGATCTTCAACAA" >> $@
	echo "TATTGTAGTAAATTGGAAGCTGATTTGCAAGGTAAATTTAGTTCATTAGTGTCATCAGTTGAGTGGTATC" >> $@
	echo "TAAGGTCTATGGAATTACCAGATGATGTAAAGAATGACATTGAACAGCAGTTAAATTCAATTGATTTAAT" >> $@
	echo "TAATCCCATTAATGCTATAGATGATATCGAATCGCTGATTAGAAATTTAATTCAAGATTATGACAGAACA" >> $@
	echo "TTTTTAATGTTAAAAGGACTGTTGAAGCAATGCAACTATGAATATGCATATGAGTAGTCATATAATTAAA" >> $@
	echo "AATATTAACCATCTACACATGACCCTCTATGAGCACAATAGTTAAAAGCTAACACTGTCAAAAACCTAAA" >> $@
	echo "TGGCTATAGGGGCGGTTTGTGACC" >> $@
	echo "" >> $@

graph.png:
	make -ndrB -f Makefile | ../makefile2graph/make2graph | dot -Tpng -o$@
```
|
biostars
|
{"uid": 174468, "view_count": 3840, "vote_count": 5}
|
Suppose you have three experimental conditions for your RNA-seq: control, compound A treatment, compound B treatment, each with replicates. Your RNA-seq samples look like the following:
c1,c2,a1,a2,b1,b2
And your factors are (1,1,2,2,3,3).
And you use edgeR for the analysis. You have two questions to answer: what are the DEGs for treatment A, and what are the DEGs for treatment B. Let's just focus on the 1st question -- what are the DEGs for treatment A?
In terms of sample inclusion for analysis, we have two options:
1) c1,c2, a1,a2. You just compare (a1,a2) with (c1,c2), e.g., coef=2
2) c1,c2, a1,a2, b1,b2. You compare (a1,a2) with (c1,c2), but you use contrast=c(-1,1,0).
The pro of option 1) is that it's simple - we do pairwise comparisons all the time. The pro of option 2) is that it uses more samples and should give you more statistical power; the con is that it's not as simple, and the inclusion of the condition B samples could complicate your analysis and usually makes it hard to explain to your biologists.
I compared the two options and found they generated somewhat different results. Power-wise (min p-values), sometimes option 1) is better, sometimes option 2).
Which way is the better practice?
Thanks!
|
In my opinion, the second option is by far the best, for two reasons:
1. As you said, the estimation of variance is better with more samples. By the way, this doesn't always mean more power (smaller *p-values*). In some cases the added samples increase the estimated variance, so the *p-value* increases, but for good reason.
2. It makes sense to normalize and process all your samples together, since they are from the same study. The fact that you can answer both your questions (effect of A and B) from the same dataset, only changing the contrast, makes **option 2 simpler than option 1**. If you choose option 1, the size factors and the estimated variance would not be the same in dataset (c1,c2,a1,a2) and (c1,c2,b1,b2). Imagine showing a biologist two Excel sheets of normalized counts, one for dataset (A vs C), the other for dataset (B vs C). Wouldn't it be confusing to see that the value for gene X in condition c1 differs between the two sheets?
|
biostars
|
{"uid": 288580, "view_count": 1710, "vote_count": 1}
|
Hi everyone!
I am working with RNA-seq paired-end data. When I run tophat-fusion on a dataset, I get a mapping percentage of 70-80% for both left and right reads, and the concordant pair alignment rate is about 72%. However, if I run Tophat without fusion search, I get the same mapping percentage for left and right reads, but the concordant alignment rate decreases to 30-40%. I am running both commands with the same arguments.
Does anyone know why the concordant pair alignment rate decreases with tophat-fusion? My guess would be that it should increase the concordant alignment rate as it is going to be able to map fusion events that would be unmapped by tophat.
Thank you very much!
|
Tophat-fusion removes reads that map to more than two places in the genome; thus, the Tophat mapping percentage can be higher because it won't remove multimappers by default.
|
biostars
|
{"uid": 218196, "view_count": 1777, "vote_count": 1}
|
I am building a large database for my analysis tool, ~3 terabytes.
The issue is distributing this database for people to use.
A solution we have learned from working with sequencing cores is to ship a hard drive by FedEx - a possibly expensive and annoying proposition.
Another solution is to host the files for download, but this seems possibly expensive or impractical for the audience. (On the point of practicality: most universities have gigabit-level bandwidth, so perhaps 3 terabytes is no longer a ludicrous download size, simply a big one. At 1 gigabit/s, I estimate the download would take 400 minutes.)
Since this is a niche problem, I am hoping someone may have had a similar issue and can provide advice?
Thank you
Jeremy Cox
|
Shipping disk is the most reliable solution once you've reached the TB range. The problem with sharing over the network is that the speed is only going to be as high as the network bottleneck between the two ends (including bottleneck inside the institutional networks) then there's some overhead depending on the protocol. Also the connections are often not stable over the many hours (up to days) required to transfer a large data set so you need mechanisms that can automatically reconnect and resume where the transfer was interrupted eventually replacing files that were corrupted when the connection was lost. There are technologies for large data transfer over the network (e.g. Aspera, GridFTP) but you end up running into costs and deployment issues.
You should also consider not transferring the data. Maybe you can provide local compute to your collaborators or allow them to extract relevant subsets and only transfer those.
|
biostars
|
{"uid": 222238, "view_count": 1663, "vote_count": 1}
|
Hi guys,
I wonder if there are any up-to-date, publicly available genomes of bacteria I could use as input for a BLAST/BLAT (fasta/2bit format) search. If so, would you send me links? I'm aware of the database '16S ribosomal RNA sequences (Bacteria and Archaea)', however its coverage is rather low. I appreciate your experiences.
**EDIT**: thank you guys, you've helped me a lot. It's a shame that only one answer can be picked :)
|
You can get a lot from the [Human Microbiome Project](http://www.hmpdacc.org/HMREFG/).
|
biostars
|
{"uid": 54480, "view_count": 2771, "vote_count": 2}
|
I am trying to subset a fasta file as described in another post (https://www.biostars.org/p/93056/). I simply want to extract a subset of sequences from a fasta file.
I load the package and the data as described in the other thread:
```
library("seqinr")
fastafile <- read.fasta(file = "proteins.fasta",
                        seqtype = "AA", as.string = TRUE, set.attributes = FALSE)
```
as well as the list of IDs I want to use for subsetting (the column in `subsetlist` is called `id`).
    subsetlist <- read.table("~/scripts/subsetfasta/test.txt", header=TRUE)
When I attempt to use the solution from the previous thread:
fastafile[names(fastafile) %in% subsetlist$id]
I get the following:
named list()
What am I doing wrong or missing?
best regards
Henrik
|
Looks like none of your fasta record names are in the subset list. You can check by just calling
names(fastafile) %in% subsetlist$id
Which will be a vector of FALSEs if that's the case.
You probably want to check exactly how the subset list and fasta file IDs are formatted, and use (g)sub, paste or related functions to get them to match.
|
biostars
|
{"uid": 125183, "view_count": 18758, "vote_count": 1}
|
Hi,
I'm working on RNA-seq data and have aligned my reads to the assembled transcripts using bwa-mem. When I tried to run Picard Tools (`SamFormatConverter.jar`), the SAM dictionary threw an exception:
    "Cannot add sequence that already exists in SAMSequenceDictionary".
I then checked for duplication in the SAM file header and, sure enough, two entries were duplicated. How do I correct this? The commands used are given below:
    bwa index contigs.fa
    bwa mem contigs.fa reads_1.fq reads_2.fq > samFile.sam
    java -Xmx4g -jar SamFormatConverter.jar INPUT=samFile.sam OUTPUT=bamFile.bam
This is where I faced the error.
I checked for duplicates using:
    samtools view -HS samFile.sam | sort | uniq -c
2 @SQ entries were present more than once.
Any help would be appreciated, thanks!
|
The `@SQ` headers that bwa emits come directly from the reference sequences in the index you're mapping against.
So the likely explanation for your problem is that the same entries are duplicated in your `contigs.fa` file. You can check by looking at the `>` headers in your fasta file:
grep '>' contigs.fa | sort | uniq -c
(If there are header lines like `>foo blah` and `>foo blah blah` then these will lead to duplicate `@SQ SN:foo` SAM headers while not being shown as exact duplicate lines by that `uniq` command. So you may have to be a little careful in interpreting the results of that command.)
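If duplicates do turn up, one way to clean the contigs before re-indexing is to drop repeated IDs; a minimal sketch that keeps the first occurrence of each (output file name is a placeholder):
```
# Sketch: remove FASTA records whose ID (first word of the header) has
# already been seen, keeping the first occurrence only.
seen = set()
keep = False
with open("contigs.fa") as fasta, open("contigs.dedup.fa", "w") as out:
    for line in fasta:
        if line.startswith(">"):
            record_id = line[1:].split()[0]
            keep = record_id not in seen
            seen.add(record_id)
        if keep:
            out.write(line)
```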
|
biostars
|
{"uid": 81520, "view_count": 8091, "vote_count": 1}
|
Hello dears,
We can index files for fast random access, and a SAM/BAM file has to be sorted before it can be indexed.
But when it comes to FASTA files, I don't understand why we can index directly without sorting first:
    samtools faidx <ref.fa>
Is there an easy explanation? Thanks!
|
Hello,
indexing is a very fascinating topic.
The index files produced by `samtools faidx` and `.bai` have very different structures. I guess there are mainly two reasons for it:
1. In a `bam` file we typically have many more entries than in a `fasta` file.
2. The way we query the data differs. For a `fasta` file we typically ask "give me the sequence with the id XY". For `bam` files we ask "give me all reads that overlap a region".
The `fasta` index is quite simple. It just contains the names of the sequences, where in the file each header starts, how long the header is, and how many bases the sequence has. See the [specs][1] for it. As the number of sequences in a `fasta` file is quite small (compared to a `bam` file), we can just iterate over the index file to find the offset of a sequence we want in a reasonable time.
In the case of `bam`, the index file is organized in bins, which contain the offsets of reads that overlap a region. See the [sam specs][2] for it. To be able to say where a bin begins and ends, it is necessary to sort the `bam` file.
fin swimmer
[1]: http://www.htslib.org/doc/faidx.html
[2]: http://samtools.github.io/hts-specs/SAMv1.pdf
|
biostars
|
{"uid": 339775, "view_count": 6170, "vote_count": 1}
|
Hi there,
I've used VEP (GRCh38.90) with the option --canonical to annotate my variants from WES and want to focus on canonical transcripts. This gives me a column if the associated transcript is the canonical one.
Now I'd like to retrieve a list of all canonical transcripts as used by VEP and failed to do so. I've tried to get all canonical transcripts following this thread https://groups.google.com/forum/#!topic/biomart-users/skO4zgqzGBA and I've downloaded knownCanonical.txt from UCSC.
However, both approaches have discrepancies to VEP. For example VEP reports ENST00000288602 as the canonical transcript of BRAF, while the output from biomart reports ENST00000496384 and knownCanonical reports ENST00000646891. I would like to go with VEP, so is there a way to obtain such a list from the VEP cache for example?
Any help is greatly appreciated.
|
[Glossary](http://www.ensembl.org/info/website/glossary.html)
**Canonical transcript**
The canonical transcript is used in the gene tree analysis in Ensembl and does not necessarily reflect the most biologically relevant transcript of a gene. For human, the canonical transcript for a gene is set according to the following hierarchy:
1. Longest CCDS translation with no stop codons.
2. If no (1), choose the longest Ensembl/Havana merged translation with no stop codons.
3. If no (2), choose the longest translation with no stop codons.
4. If no translation, choose the longest non-protein-coding transcript.
|
biostars
|
{"uid": 382078, "view_count": 3841, "vote_count": 2}
|
Consider the following example from [bedtools intersect help page][1]. Say, there are 4 bed files as follows
$ cat query.bed
chr1 1 20
chr1 40 45
chr1 70 90
chr1 105 120
chr2 1 20
chr2 40 45
chr2 70 90
chr2 105 120
chr3 1 20
chr3 40 45
chr3 70 90
chr3 105 120
chr3 150 200
chr4 10 20
--
$ cat d1.bed
chr1 5 25
chr1 65 75
chr1 95 100
chr2 5 25
chr2 65 75
chr2 95 100
chr3 5 25
chr3 65 75
chr3 95 100
--
$ cat d2.bed
chr1 40 50
chr1 110 125
chr2 40 50
chr2 110 125
chr3 40 50
chr3 110 125
--
$ cat d3.bed
chr1 85 115
chr2 85 115
chr3 85 115
If we use bedtools like this
$ bedtools intersect -a query.bed \
-b d1.bed d2.bed d3.bed
chr1 5 20
chr1 40 45
chr1 70 75
chr1 85 90
chr1 110 120
chr1 105 115
chr2 5 20
chr2 40 45
chr2 70 75
chr2 85 90
chr2 110 120
chr2 105 115
chr3 5 20
chr3 40 45
chr3 70 75
chr3 85 90
chr3 110 120
chr3 105 115
It reports the interval e.g. `chr1 5 20`; however, I don't want that to be reported, because it is not present in `d3.bed`. I know it is being reported because it overlaps between `query.bed` and `d1.bed`. But my requirement is different: how can I tweak `bedtools` to get ONLY those records which overlap across all files?
[1]: https://bedtools.readthedocs.io/en/latest/content/tools/intersect.html
|
Try the following
bedtools multiinter -i *.bed > test1
files="$(ls *.bed|wc -l)"
awk -v OFS='\t' -v files=$files '$4==files' test1 |cut -f 1-3 > test2
What it does:
bedtools multiinter -i *.bed > test1 # use bedtools multi intersect to get statistics on intersections of all bed files
files="$(ls *.bed|wc -l)" # make a variable called files that is the number of bed files in the directory
awk -v OFS='\t' -v files=$files '$4==files' test1 |cut -f 1-3 > test2 # get only regions that are found in all of the files (column 4 is num, which is the number of input files that have that region)
test2 is the file that contains the regions found in all files, assuming that the only bed files in the directory are query.bed, d1.bed, d2.bed, and d3.bed.
|
biostars
|
{"uid": 368263, "view_count": 1447, "vote_count": 1}
|
Hello
I have a few questions while studying the MarkDuplicates metrics file.
There is a field called `SECONDARY_OR_SUPPLEMENTARY_RDS`.
I know what secondary reads are, but I am wondering about supplementary reads.
Thanks in advance for a full explanation.
|
Section 1.2 "Terminologies and Concepts" of the [SAM file format specifications][1] explain this:
> Chimeric alignment
>
>An alignment of a read that cannot be represented
> as a linear alignment. A chimeric alignment is represented as a set of
> linear alignments that do not have large overlaps. Typically, one of
> the linear alignments in a chimeric alignment is considered the
> “representative” alignment, and the others are called “supplementary”
> and are distinguished by the supplementary alignment flag. All the SAM
> records in a chimeric alignment have the same QNAME and the same
> values for 0x40 and 0x80 flags (see Section 1.4). The decision
> regarding which linear alignment is representative is arbitrary.
[1]: https://samtools.github.io/hts-specs/SAMv1.pdf
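In terms of the FLAG field, secondary alignments carry bit 0x100 and supplementary alignments carry bit 0x800 (per the same spec), which is what that metric counts together; a minimal sketch of the check:
```
# Sketch: classify a SAM FLAG value as secondary and/or supplementary.
SECONDARY = 0x100      # 256:  not the primary alignment of this read
SUPPLEMENTARY = 0x800  # 2048: part of a chimeric (split) alignment

def classify(flag):
    return {"secondary": bool(flag & SECONDARY),
            "supplementary": bool(flag & SUPPLEMENTARY)}

print(classify(2048))  # {'secondary': False, 'supplementary': True}
```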
|
biostars
|
{"uid": 308853, "view_count": 12103, "vote_count": 2}
|
Hi all,
I am using clusterProfiler to perform KEGG enrichment. I have a list of gene symbols (rat genes). First I need to translate the symbols to Entrez IDs.
I installed the genome-wide annotation for rat:
source("https://bioconductor.org/biocLite.R")
biocLite("org.Rn.eg.db")
Then I tried to run the command:
x <- c("GPX3", "GLRX", "LBP", "CRYAB", "DEFB1", "HCLS1", "SOD2", "HSPA2",
"ORM1", "IGFBP1", "PTHLH", "GPC3", "IGFBP3","TOB1", "MITF", "NDRG1",
"NR1H4", "FGFR3", "PVR", "IL6", "PTPRM", "ERBB2", "NID2", "LAMB1",
"COMP", "PLS3", "MCAM", "SPP1", "LAMC1", "COL4A2", "COL4A1", "MYOC",
"ANXA4", "TFPI2", "CST6", "SLPI", "TIMP2", "CPM", "GGT1", "NNMT",
"MAL", "EEF1A2", "HGD", "TCN2", "CDA", "PCCA", "CRYM", "PDXK",
"STC1", "WARS", "HMOX1", "FXYD2", "RBP4", "SLC6A12", "KDELR3", "ITM2B")
eg = bitr(x, fromType="SYMBOL", toType="ENTREZID", OrgDb="org.Rn.eg.db")
Then I got the warning message:
> 'select()' returned 1:1 mapping between keys and columns Warning
> message: In bitr(x4, fromType = "SYMBOL", toType = "ENTREZID", OrgDb =
> Rat) :
> 98.21% of input gene IDs are fail to map...
Could anyone tell me what happened and what I should do?
Many thanks,
Stanley
|
The root of your problem is that your gene names are all upper case, but for rat (and mouse etc) generally only the first letter is capitalised, and the `bitr` conversion is case sensitive. We can use the function `str_to_title` in the **stringr** package to fix this:
install.packages('stringr')
x2 <- stringr::str_to_title(x)
head(x2)
Here's what they look like now:
[1] "Gpx3" "Glrx" "Lbp" "Cryab" "Defb1" "Hcls1"
Then rerun your query:
eg = bitr(x2, fromType="SYMBOL", toType="ENTREZID", OrgDb="org.Rn.eg.db")
and we get a much better proportion of converted IDs
'select()' returned 1:1 mapping between keys and columns
Warning message:
In bitr(str_to_title(x), fromType = "SYMBOL", toType = "ENTREZID", :
1.79% of input gene IDs are fail to map...
Looking at some of the other answers, **PVR** is an exception to the capitalisation rule. Since Kevin's **biomaRt** answer isn't case sensitive, it's perhaps the best solution assuming you have reliable internet.
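For completeness, a sketch of that biomaRt route (the attribute/filter names are from memory and vary between releases, e.g. older ones used `entrezgene` instead of `entrezgene_id`, so verify them with `listAttributes(mart)`):

    library(biomaRt)

    mart <- useMart("ensembl", dataset = "rnorvegicus_gene_ensembl")
    getBM(attributes = c("external_gene_name", "entrezgene_id"),
          filters = "external_gene_name",
          values = x, # the original all-caps symbols work here
          mart = mart)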
|
biostars
|
{"uid": 296321, "view_count": 8203, "vote_count": 2}
|
I am a complete newbie to machine learning and this is my first try, but I am not sure I am approaching this field in the right way, so I would very much appreciate feedback.
My data and what I am trying to answer:
I have a data frame filled with IDs measured across three different features, a classification based on IDs, measured Features I would like to test, and values that depend on the particular ID-Feature combination.
My goal is to test whether one Feature, or a combination of Features, can be predictive of the Class_Name (defined from the ID). For now the data is in long format as below (example, the number of IDs varies for each feature and it is in the order of 100, so bigger than in this example):
df <- data.frame (Name = c("ID1", "ID2", "ID3", "ID4", "ID5", "ID6", "ID9", "ID5", "ID7", "ID8", "ID9", "ID10", "ID4", "ID11", "ID12", "ID8", "ID9", "ID11", "ID13", "ID8", "ID12", "ID4"), Class_Name = c("orange", "orange", "red", "orange", "blue", "blue", "red", "blue", "orange", "blue", "red", "red", "orange", "orange", "red", "blue", "red", "orange", "red", "blue", "red", "orange"), Features = c("F1","F1","F1", "F1","F1","F1","F1","F2","F2","F2","F2","F2","F2","F2","F2","F3", "F3", "F3", "F3", "F3", "F3", "F3"), Values = c(21, 32, -36, 11, -62, 32, 34, -21, -43, -68, -24, -19, 28, 33, -33, 15, 2, 13, -99, -86, 3, 0))
df$Class_Name= as.factor(df$Class_Name)
I think I need to test each Feature in turn (script below). However, how do I then test whether combinations of features help with the prediction?
I am following tutorials on caret, particularly [this one][1] (if anyone knows of tutorials in R with examples like this, please post them), but I am afraid I am not approaching my question in the correct way, and I was wondering if you could direct me to the correct path.
Script I have tried:
library(caret)
# library(dplyr)
# table(df$Class_Name)
# select 2 so have balanced set for each class?
# df <- df %>% group_by(Features, Class_Name) %>% slice_sample(n=2) %>% data.frame() # this will be 100 in my data
control <- trainControl(method="cv", number=10)
metric <- "Accuracy"
features = unique(df$Features)
res = data.frame()
for (f in features) {
dataset = df[df$Features == f, ]
# Random Forest
fit.rf <- train(Class_Name~Values, data=dataset, method="rf", metric=metric, trControl=control)
print(fit.rf)
}
[1]: https://machinelearningmastery.com/machine-learning-in-r-step-by-step/
|
For testing multiple interactions, you can use:
model = randomForest(response ~ (var1 + var2 + var3)^2, data = your_data)
See a similar question from Stack Overflow here:
[Interaction Effects in R][1]
[1]: https://stackoverflow.com/questions/39886562/how-to-check-interaction-effects-for-a-lot-of-predictors-in-r
However, I believe caret is automatically looking for interaction effects.
To see which variables are contributing the most to your model, use varImp().
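For instance, applied to the caret fit from your loop (a sketch; the `top` argument just limits the plot):

    imp <- varImp(fit.rf)  # scaled importance of each predictor / interaction term
    print(imp)
    plot(imp, top = 10)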
You can also check out the book "An Introduction to Statistical Learning" by James et al., which should answer all of your questions.
|
biostars
|
{"uid": 9522158, "view_count": 797, "vote_count": 1}
|
I am trying to make an UpSetR plot for nine vcf files using [working code][1]. However, the output plot shows only five of the total nine files.
[1]: https://www.biostars.org/p/379679/#379721
|
Try increasing `nsets` when calling `upset`, like:
```
upset(fromList(read_sets), order.by = "freq", nsets = 9)
```
`?upset` shows:
>nsets Number of sets to look at
|
biostars
|
{"uid": 379930, "view_count": 386, "vote_count": 1}
|
Dear all,
Do you have any idea how to easily extract the contigs from a fasta file which contain a specific sequence?
For example:
My sequence:
ACCGTACCC
My FASTA:
```
>c1042
ACCGTACCC
>c1043
GCTACAGTTGAAAGGGGACCGTACCC
>c1044
ATGAATAAAATAATTTTGTATCATAAATCGAGCTGTTAATTATT
>c1044
TTCATATTTGTAGCTAAGCAGAGGCGAAGCGTTCTTGTATCG
```
My output:
```
>c1042
ACCGTACCC
>c1043
GCTACAGTTGAAAGGGGACCGTACCC
```
Thank you so much for any ideas and help.
|
Use biopython or bioperl. With biopython, you could either use the re module or even just `find()` on the `str()` representation of each sequence. Either of these should be simple enough if you're familiar with either perl or python.
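For example, a minimal Biopython sketch (the file names are placeholders, and this is a plain substring search that ignores the reverse complement):

    from Bio import SeqIO

    query = "ACCGTACCC"
    with open("matching_contigs.fasta", "w") as out:
        for rec in SeqIO.parse("contigs.fasta", "fasta"):
            if str(rec.seq).find(query) != -1:  # or: query in str(rec.seq)
                SeqIO.write(rec, out, "fasta")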
|
biostars
|
{"uid": 125610, "view_count": 5842, "vote_count": 3}
|
I'm at ftp://ftp.ncbi.nih.gov/entrez/misc/data/gc.prt with all the genetic codes. Trying to figure out what genetic code *Nostoc punctiforme* uses. Is there a database that has which version they use? I know it's not standard and they have multiple start codons.
|
If you go to [NCBI Taxonomy](http://www.ncbi.nlm.nih.gov/Taxonomy/taxonomyhome.html/) and search for *Nostoc punctiforme*, you'll reach its Taxonomy page with links for several strains; clicking on the top-level *Nostoc punctiforme* link will take you to the [page](http://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?mode=Info&id=272131&lvl=3&p=mapview&p=has_linkout&p=blast_url&p=genome_blast&lin=f&keep=1&srchmode=1&unlock) which will have the genetic [code](http://www.ncbi.nlm.nih.gov/Taxonomy/taxonomyhome.html/index.cgi?chapter=cgencodes#SG11) you are looking for.
|
biostars
|
{"uid": 150706, "view_count": 1502, "vote_count": 1}
|
I think I missed a detail in my understanding of missing residues. I understand that in some cases residues don't end up as part of the PDB model because of crystallization issues. However, aside from that case, can numbers be missing from the residue sequence even though the residues are actually contiguous, for the same reason that insertion codes may be used in the opposite scenario? In those cases, should the residues be treated as contiguous during processing?
Is this part of the reason why software packages such as Bio3d offer features for detecting chain breaks? Is this sort of software applicable to distinguishing the above cases?
|
Yes, as you wrote, the residue sequence number in the PDB format doesn't need to be sequential.
It's well explained by F.C. Bernstein here:
https://lists.sdsc.edu/pipermail/pdb-l/2004-March/001513.html
And some atoms or residues may be missing from the model.
How to distinguish the two cases?
If the PDB file has the SEQRES record - it should contain the list of all residues in the chain -- also the ones missing from the model. So this is one way of checking for missing residues.
Alternatively, you may check distances between atoms.
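As an illustration of the distance check, a rough Biopython sketch (the library is my choice, and the 2.0 Å cutoff is an assumption; a real peptide-bond C-N distance is about 1.33 Å):

    from Bio.PDB import PDBParser

    structure = PDBParser(QUIET=True).get_structure("s", "model.pdb")
    for chain in structure[0]:
        residues = [r for r in chain if "CA" in r]  # amino-acid residues only
        for prev, cur in zip(residues, residues[1:]):
            if "C" in prev and "N" in cur:
                d = prev["C"] - cur["N"]  # Bio.PDB overloads '-' as distance
                if d > 2.0:               # likely chain break
                    print(chain.id, prev.id[1], "->", cur.id[1], round(d, 2))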
(I don't know anything about Bio3d).
|
biostars
|
{"uid": 248894, "view_count": 2365, "vote_count": 1}
|
Hi guys,
I have protein expression data in a data frame df, where the proteins are rows and columns are sample ids with abundance values. Such as:
Sample 1 Sample 2 Sample 3
RPH3A
CA11
AIFM1
I want to keep those proteins for which I have data in at least 50% of the samples. Any help would be appreciated!
|
You can calculate the percentage of NAs in each row and add that as a column to your data frame in R. Then you can use awk to remove rows that have more than 50% NAs.
In R:
count_na_func = function(x) {
sum(is.na(x))
}
df$na_percent = (apply(df, 1, count_na_func))/(ncol(df) - 1)
write.table(df, file = "datana.tsv", row.names=FALSE, sep="\t")
You can use some practice data to make sure that the na_percent column is computed correctly in R.
In bash:
awk '{if($12<=0.5){print}}' datana.tsv > newdata.tsv
Note that awk also numbers fields starting from 1, so the column number in the awk command matches the column position in the written file. na_percent was in column 12 of my data frame, but it will probably be in a different column in yours, so substitute that column number for 12 in the awk code.
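For what it's worth, the same filtering can also be done in R alone; a sketch assuming column 1 holds the protein IDs:

    keep <- rowMeans(is.na(df[, -1])) <= 0.5  # fraction of NA samples per protein
    newdata <- df[keep, ]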
|
biostars
|
{"uid": 9525409, "view_count": 387, "vote_count": 1}
|
Hi
I have a list of chromosomes and positions that looks like this:
1 10045
1 93056
1 109272
1 127711
1 127822
.
.
.
And now I would like to use it to remove them from my vcf file. Do you know how to do this?
|
[bcftools][1] can do this:
    $ bcftools view -T ^list_snp_exclude.txt input.vcf > output.vcf

With the `^` before the file with the coordinates one tells `bcftools` to exclude these regions.
fin swimmer
[1]: https://www.htslib.org/doc/bcftools.html#common_options
|
biostars
|
{"uid": 219946, "view_count": 7984, "vote_count": 1}
|
Hi Everyone,
I am working on whole genome sequencing and analysis of human genomes from the Illumina HiSeq platform, with about 30X coverage. Each sample (human genome) has about 250-300 fastq.gz files, which I am processing with 'fastqc' for quality checks using the following command:
/usr/local/bin/fastqc -t 8 -f fastq -o OUT/ -casava *.gz -noextract
Although it is running fine and generating an equal number of "fastqc.zip" files, which I unzipped using unzip '*.zip'. So, here I have 2 questions:
1. Can I merge two or more fastq files and then run fastqc on those merged files? If yes, how should I merge those fastq files?
2. I have to manually check 250-300 fastqc folder to know the quality by opening .html page. Is there any way where I can have summary of overall quality of the fastq files in a flowcell?
Please let me know your comments. I'll be highly thankful to you.
Best,
Ravi
|
Not only can you merge the fastq files but your life might be easier if you do. For merging them, a simple `cat` will suffice. I should note that you don't have to be delivered 300 some odd files, you can request that whomever is doing the sequencing just give you two files (assuming paired-end) per sample/library (the `bcl2fastq` program that they use to process the bcl files produced by the sequencer can trivially do this).
If you don't want to wait until all of the files are merged, you can likely just use a named pipe as input to fastqc. Something like:
mkfifo foo.fastq.gz
cat sample_L1_R1_???.fastq.gz > foo.fastq.gz
fastqc foo.fastq.gz
Given that fastqc is written in java, I can't guarantee that it'll properly handle block gzipped files like that (the java gzip library has been broken for years). You can always `zcat` instead. I should note that the only reason process substitution likely wouldn't work is that fastqc names the output files after the input file name.
For 2. it depends a bit on what you want. The sequencing facility actually has an idea about that already (it's produced by the machine). It's easy enough to just ask them (they can also give you a break down of how many reads per sample, their average quality (also per sample), etc.). For our internal pipeline, I have a pdf produced with that sort of information, since it's a bit quicker to look first at a single table like that than to trudge through all of the fastqc files. BTW, fastqc also produces an HTML file with the images included. When I QC flowcells before sending results to our local groups those are what I personally look at...it's quicker than dealing with the zip files.
|
biostars
|
{"uid": 141797, "view_count": 37455, "vote_count": 4}
|
Hello,
I have a matrix in which I am trying to retrieve specific rows from it and save it in text
An example matrix is
```
Affy ID DDM1 DGKI2 FDGYYY1 GUHIL6
1438_at 0.0635 0.2065 -0.2112 0.0856
1487_at 0.071 -0.1315 0.0263 0.0198
1494_f_at 0.0045 -0.0237 0.0156 -0.1352
1598_g_at -0.0541 0.0006 -0.1369 -0.0589
160020_at -0.0925 0.2182 -0.1967 -0.0074
1729_at -0.0017 -0.2209 -0.086 -0.0709
1773_at -0.0273 -0.0181 0.1042 0.0136
177_at -0.0276 -0.2563 0.3975 -0.0535
179_at -0.0472 0.0979 -0.216 -0.2814
1861_at 0.0121 -0.4038 0.0016 0.0334
200000_s_at -0.1021 -0.0887 -0.0452 0.0035
```
Lets say the name of this matrix is `M.txt` and the selected rows is in a list named `mSelected.txt` (consisting of `1438_at`, `1729_at` and `200000_s_at`). My output should look like the following file
```
Affy ID DDM1 DGKI2 FDGYYY1 GUHIL6
1438_at 0.0635 0.2065 -0.2112 0.0856
1729_at -0.0017 -0.2209 -0.086 -0.0709
200000_s_at -0.1021 -0.0887 -0.0452 0.0035
```
Is there also anyway to convert their Affy ID to Gene name?
```
$ head /Users/Desktop/mSelected.txt | cat -vet
1438_at^M1729_at^M200000_s_at
$ head /Users/Desktop/m.txt | cat -vet
Affy ID ^I DDM1 ^IDGKI2 ^IFDGYYY1 ^I GUHIL6^M1438_at^I0.0635^I0.2065^I-0.2112^I0.0856^M1487_at^I0.071^I-0.1315^I0.0263^I0.0198^M1494_f_at^I0.0045^I-0.0237^I0.0156^I-0.1352^M1598_g_at^I-0.0541^I0.0006^I-0.1369^I-0.0589^M160020_at^I-0.0925^I0.2182^I-0.1967^I-0.0074^M1729_at^I-0.0017^I-0.2209^I-0.086^I-0.0709^M1773_at^I-0.0273^I-0.0181^I0.1042^I0.0136^M177_at^I-0.0276^I-0.2563^I0.3975^I-0.0535^M179_at^I-0.0472^I0.0979^I-0.216^I-0.2814^M1861_at^I0.0121^I-0.4038^I0.0016^I0.0334^M200000_s_at^I-0.1021^I-0.0887^I-0.0452^I0.0035
```
|
awk 'FNR==NR{a[$0];next}($1 in a)' mSelected.txt M.txt
If the files are huge, join could be the fastest way:
join -t "<TAB>*" -1 1 -2 1 -o 1.1,1.2,1.3,1.4,1.5 <(sort -k1,1 M.txt) <(sort -k1,1 mSelected.txt) > output
*literal tab = ctrl-v-tab
|
biostars
|
{"uid": 129257, "view_count": 2765, "vote_count": 1}
|
I am running bowtie2 for mate pairs specifying this parameter:
bowtie2 --local --phred64 --threads 25 -x "rest of the command"
It gives me this error:
```
Saw ASCII character -126 but expected 64-based Phred qual.
Try not specifying --solexa1.3-quals/--phred64-quals.
terminate called after throwing an instance of 'int'
bowtie2-align died with signal 6 (ABRT)
```
I removed the parameter `--phred64`, but it gives another error:
```
Saw ASCII character -126 but expected 33-based Phred qual.
terminate called after throwing an instance of 'int'
bowtie2-align died with signal 6 (ABRT)
```
Any idea?
Thanks
|
I struggled several times with this kind of problem. Somehow (I don't know why) you may have a non-ASCII character in your fastq file.
Bowtie2 reports the ASCII code "-126", which does not exist as such: ASCII codes run from 0 to 127, and -126 is simply what the byte 130 (0x82) looks like when read as a signed char, i.e. a non-ASCII character.
It is relatively likely that you have a non-ASCII character in the quality lines of your fastq file.
You can search for a line containing a non-ASCII character by using the following commands. If the file is gzipped, use this:

    zcat FILE.fastq.gz | perl -e '$line = 1; while (<>) {if(/[^[:ascii:]]/) {print "LINE: $line\n$_";} $line++;}'

Otherwise this:

    perl -e '$line = 1; while (<>) {if(/[^[:ascii:]]/) {print "LINE: $line\n$_";} $line++;}' FILE.fastq

Let's say you received such an output:

    LINE: 21410096
    CCCCCGGGGGGGGGG�GGGG

Your non-ASCII sign will be depicted as this question-mark symbol.

After finding the responsible line you can "enlarge" the "surrounding area" by using sed:

    sed -n '21410093,21410096p' FILE.fastq

So we got something like this as an output:

    @sequence HEADER
    CTGGCTGGGAAGGGGCTGGCT
    +quality HEADER
    CCCCCGGGGGGGGGG�GGGG

Such a corrupted entry can easily be removed by using sed. In this case the corrupted read was located at lines 21410093 to 21410096, so the following command will work (note that there must be no `-n` here, otherwise nothing would be printed):

    sed '21410093,21410096d' FILE.fastq > repaired.fastq

This would be problematic if you have paired-end reads. Either remove the read entry at the same position in the corresponding fastq file, or replace the corrupting character in your initial fastq file. I would prefer the latter. This can be done by using sed:

    sed '21410096s/CCCCCGGGGGGGGGG�GGGG/CCCCCGGGGGGGGGGIGGGG/' FILE.fastq > FILE.repaired.fastq

(actually I'm surprised that this one worked).

I hope this helps.
|
biostars
|
{"uid": 119759, "view_count": 4725, "vote_count": 1}
|
Hi,
I have a text file of 7500 genomic positions with their chr, start, end and want to get their nucleotide sequences. Can someone point me to a tool, or share any thoughts on how to do it?
Thank you,
|
https://www.biostars.org/p/194822/#194987
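In short, the common route is `bedtools getfasta`; a minimal sketch (file names assumed, and note that BED starts are 0-based, hence the `-1` if your positions are 1-based):

    awk 'BEGIN{OFS="\t"} {print $1, $2-1, $3}' positions.txt > positions.bed
    bedtools getfasta -fi reference.fa -bed positions.bed -fo sequences.fa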
|
biostars
|
{"uid": 196364, "view_count": 3541, "vote_count": 3}
|
I am trying to extract only the columns I need from VCF preserving VCF structure, its header, its formatting. I am using bcftools. I tried doing:
```
bcftools annotate -c CHROM,POS,ID,REF,ALT,QUAL,FILTER,INFO/AF,INFO/AC,INFO/AN Holland.vcf -o Holland_selected_cols.vcf
```
But the output file just stays the same. Then I tried `query`:
```
bcftools query -f'[%CHROM\t%POS\t%ID\t%REF\t%ALT\t%QUAL\t%FILTER\t%INFO/AF;%INFO/AC;%INFO/AN\n]' -H Holland.vcf -o Holland_selected_cols.vcf
```
But it does not preserve `VCF` header. What would be the right `bcftools` command for that?
|
Use `bcftools annotate -x` to remove all fields, except those you want to keep:
bcftools annotate -x ^INFO/AF,^INFO/AC,^INFO/AN,^FORMAT input.vcf
|
biostars
|
{"uid": 408167, "view_count": 4295, "vote_count": 1}
|
Hello,
I know the following question have been asked many times by now, but the suggested solutions were not helpful.
I have a set of taxIDs and I want to convert them to taxonomies, but I do not want to download any extra databases. Is there any simple way to do this via terminal?
|
Using [Entrezdirect][1]:
    $ esearch -db taxonomy -query "4932 [taxID]" | efetch -format native -mode xml | xtract -pattern Taxon -block "*/Taxon" -unless Rank -equals "no rank" -tab "\n" -element Rank,ScientificName
superkingdom Eukaryota
kingdom Fungi
subkingdom Dikarya
phylum Ascomycota
subphylum Saccharomycotina
class Saccharomycetes
order Saccharomycetales
family Saccharomycetaceae
genus Saccharomyces
OR
$ esearch -db taxonomy -query "4932 [taxID]" | efetch -format native -mode xml | grep ScientificName | awk -F ">|<" 'BEGIN{ORS=", ";}{print $3;}'
Saccharomyces cerevisiae, cellular organisms, Eukaryota, Opisthokonta, Fungi, Dikarya, Ascomycota, saccharomyceta, Saccharomycotina, Saccharomycetes, Saccharomycetales, Saccharomycetaceae, Saccharomyces
[1]: http://bit.ly/entrez-direct
|
biostars
|
{"uid": 423201, "view_count": 3269, "vote_count": 1}
|
**I have two lists, lista and listb,**
**lista contains three elements, each of class data.frame:**
frame1, frame2, frame3. Each frame nrows = 50, ncols = 100
**listb contains three elements, each of class numeric:**
vector1, vector2, vector3. Each vector length is 50
**I want to do**
vector1 replace the 100th col (last column) of frame1,
vector2 replace the 100th col of frame2,
vector3 replace the 100th col of frame3.
How can I do this in R without a for loop? Many thanks!
I have searched for several hours, but couldn't find a similar question to refer to.
In R, the for-loop version would be:
lista[[1]][, 100] <- listb[[1]]
lista[[2]][, 100] <- listb[[2]]
lista[[3]][, 100] <- listb[[3]]
Here 1, 2, 3 is a toy example; in reality I will process more than 200.
|
We can use *mapply*, see this example:
# example data
set.seed(1);lista <- list(frame1 = data.frame(x = letters[1:2], y = runif(2)),
frame2 = data.frame(x = letters[3:4], y = runif(2)),
frame3 = data.frame(x = letters[5:6], y = runif(2)))
listb <- list(c(1, 11), c(2, 22), c(3, 33))
lista
# $frame1
# x y
# 1 a 0.2655087
# 2 b 0.3721239
#
# $frame2
# x y
# 1 c 0.5728534
# 2 d 0.9082078
#
# $frame3
# x y
# 1 e 0.2016819
# 2 f 0.8983897
listb
# [[1]]
# [1] 1 11
#
# [[2]]
# [1] 2 22
#
# [[3]]
# [1] 3 33
mapply(function(x, y){
x[ ncol(x) ] <- y # Assign y to the last column, in your case ncol(x) will return 100.
x},
lista, listb, SIMPLIFY = FALSE)
# $frame1
# x y
# 1 a 1
# 2 b 11
#
# $frame2
# x y
# 1 c 2
# 2 d 22
#
# $frame3
# x y
# 1 e 3
# 2 f 33
|
biostars
|
{"uid": 325880, "view_count": 681, "vote_count": 2}
|
I have been asked to recommend introductory books and resources to R and Bioconductor. My problem is just, I never read a book to learn R or Bioconductor, so I have no experience with this and cannot recommend one. I am interested mainly in introductory books, possibly targeting various groups of readers (computer scientists, molecular biologists, (bio-)statisticians), any recommendation appreciated.

For example, I used the following resources:

- The [R-manuals](http://cran.r-project.org/manuals.html), especially [the R intro](http://cran.r-project.org/doc/manuals/R-intro.html)
- There are also [a lot of contributed documents](http://cran.r-project.org/other-docs.html) on the R web site, but I didn't use them.
- If a package from Bioconductor interests me, I read the package vignette.
- I read the Bioconductor mailing list, which helps to see what other people use.
- I have the "[Venables, Ripley. S Programming](http://www.springer.com/statistics/computanional+statistics/book/978-0-387-98966-2)" book, which is hardly introductory.

Which books did you find helpful or completely useless to learn R/Bioconductor? For example: [R Programming for Bioinformatics](http://www.bioconductor.org/pub/RBioinf/) looks promising, anybody read it?

Or do you share my reluctance towards R-books and prefer online resources?
|
[R programming for bioinformatics](http://www.bioconductor.org/help/publications/books/r-programming-for-bioinformatics/) by Robert Gentleman
|
biostars
|
{"uid": 539, "view_count": 23484, "vote_count": 73}
|
When aligning ATAC-seq reads it is helpful to specify the maximum insert size. This allows the aligner to correctly mark whether the distance between mates is "proper" and in turn will set the "PROPER_PAIR" flag in the SAM file. With Bowtie2 you can specify this size with the -X flag. Is it possible to set this option with BWA MEM?
|
I recommend that people don't rely on the "proper" flag. It is not a well-defined concept.
Typically we never know for sure what is really going on. It is basically an illusion, where we give up control to the developer and assume that "proper" meant the same to them as for us. We never fully understand all the decisions and corner cases that go with it.
The 9th (TLEN) column of a SAM file contains the template length - you can filter your SAM alignments by that column in parallel to other well-defined flags. But I do agree it is a lot less convenient to do so than filtering by a flag and I wish "proper pair" would be a better defined.
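For example, a sketch of such a TLEN filter (the 2000 bp upper bound is an arbitrary assumption; pick whatever maximum insert size you would have given bowtie2):

    samtools view -h aln.bam \
        | awk -F'\t' '/^@/ { print; next }
                      { t = ($9 < 0 ? -$9 : $9); if (t > 0 && t <= 2000) print }' \
        | samtools view -b -o tlen_filtered.bam -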
|
biostars
|
{"uid": 285337, "view_count": 2696, "vote_count": 2}
|
Hi,
For running the [ActiveDriverWGS](https://github.com/reimandlab/ActiveDriverWGS) software I will need the coding or non-coding parts of the genome in BED12 format. I have found the coding part of the genome (in txt format though), but I don't know how to find the non-coding part of the genome (in BED12 format), nor the transcription factor binding sites in BED4 format. I have contacted the developer but got no response. Any suggestions, please?
|
https://elifesciences.org/download/aHR0cHM6Ly9jZG4uZWxpZmVzY2llbmNlcy5vcmcvYXJ0aWNsZXMvMjE3NzgvZWxpZmUtMjE3Nzgtc3VwcDQtdjMueGxzeA==/elife-21778-supp4-v3.xlsx?_hash=KQi5jfO3kT2c4Qw44j4Rg6YAyCBQilYuWHVYXcRDuuo%3D
|
biostars
|
{"uid": 365086, "view_count": 1661, "vote_count": 1}
|
Hi,

I have a couple of contigs, and I would like to answer questions of the following sort:

a.) Are they unique?

b.) Which organism do they come from?

c.) Do they come from a protein encoding gene?

d.) What is that protein's function etc...

I am very new to Bioinformatics (even Bio for that matter) so any help would be appreciated. Even links to further readings. Thanks.
|
A contig is a DNA sequence corresponding to a region of a genome, and it is an intermediate state in the sequencing and assembly of a genome.

> a.) Are they unique?

This question is not clear. Unique with respect to what, to other known contigs? Consider that contigs are an intermediate product of the assembly of a genome, and not all contigs are deposited into public databases.

The best thing you can do is a nucleotide blast of your sequence against the nr/nt database from NCBI. Use [this link](http://blast.ncbi.nlm.nih.gov/Blast.cgi?PROGRAM=blastn&BLAST_PROGRAMS=megaBlast&PAGE_TYPE=BlastSearch&SHOW_DEFAULTS=on&LINK_LOC=blasthome) and select it from the 'Databases' menu.

> b.) Which organism do they come from?

This is really a difficult question. It is very strange that you have a contig and you don't know which species it comes from. You can do a blast against nr/nt or RefSeq/genomic and see in which species you get the best score and p-value. Consider that since we only have few genomes sequenced for each species, we don't know enough about intraspecific variability, and moreover your contig could belong to a non-sequenced species. You will only be able to make a guess; you will need to look carefully at the alignments and at the p-values.

> c.) Do they come from a protein encoding gene?

Contigs do not come from the sequencing of a gene. They correspond to genomic regions and are variable in size and contents. A contig could contain many genes or none. You can do a blast against RefSeq transcripts, but it won't be an optimal solution because mRNAs don't align well on genomic regions.

> d.) What is that protein's function etc...

The same as above.
|
biostars
|
{"uid": 3634, "view_count": 9319, "vote_count": 2}
|
Hello,
ref --------------start-----------------------------------------stop-------------------------------
r1 -------------------------------------------------------
r2 ---------------------------------------------------------------
r3 ----------------- --------------------------------------
r4 -----------------------------------------------------------------------------
a. I would like to extract reads from bam file that overlap entirely the start and stop region from both directions (from start to stop and from stop to start). From the example above, I need to keep only r1,r2 and r4, but not r3.
How to do this?
I tried this, but I still have some small fragmented reads...
    samtools view -b -h -q 10 input.bam chrX:230-330 | awk 'BEGIN{OFS="\t"}{if($1 ~ /^@/) {print} else {if($4 >= 230) {print} else {}}}' | samtools view -Sbo output.bam -
I also tried:
    samtools view -h -q 10 input.bam chrX:230-330 | awk 'BEGIN{OFS="\t"}{if($1 ~ /^@/) {print} else {if($4 >= 230 && length($10) >= 100) {print} else {}}}' | samtools view -Sbo output.bam -
|
Create a BED file that contains the start and stop positions, then use [BEDtools][1]:
    bedtools intersect -wa -a BAM -b BED -F 1.0
[1]: https://github.com/arq5x/bedtools2
|
biostars
|
{"uid": 9470726, "view_count": 1451, "vote_count": 1}
|
I have a Total RNA TrueSeq Illumina Stranded Paired-end Ribo0 Library.
I'm using Star to get a matrice of reads count.
I dont know which column count from output to take into account to make differential gene expression analysis.
As they say [here][1]
> Outputs read counts per gene into ReadsPerGene.out.tab file with 4
> columns which correspond to different strandedness options:
>
> column 1: gene ID
>
> column 2: counts for unstranded RNA-seq
>
> column 3: counts for the 1st read strand aligned with RNA (htseq-count
> option -s yes)
>
> column 4: counts for the 2nd read strand aligned with RNA (htseq-count
> option -s reverse)
>
> Select the output according to the strandedness of your data.
>
> Note, that if you have stranded data and choose one of the columns 3
> or 4, the other column (4 or 3) will give you the count of antisense
> reads.
I ran rseqc infer_experiment.py to check the library preparation of my data.
> It returns that fraction of reads were explained by the following
> combination 1+-,1-+,2++,2-- meaning :
>
> read1 mapped to ‘+’ strand indicates parental gene on ‘-‘ strand
>
> read1 mapped to ‘-‘ strand indicates parental gene on ‘+’ strand
>
> read2 mapped to ‘+’ strand indicates parental gene on ‘+’ strand
>
> read2 mapped to ‘-‘ strand indicates parental gene on ‘-‘ strand
From what I understood, column 4 (htseq-count option -s reverse) seems to be the right count given the library prep, so I have selected column 4.
But I was wondering about column 3 (which describes the antisense reads).
For differential gene expression analysis, which counts do I need to use?
What should I do with the column 3 antisense reads? Are they a kind of artefact, or do I have to pool columns 3 and 4 together?
Thanks for your help.
[1]: https://groups.google.com/forum/#!topic/rna-star/gZRJx3ElRNo
|
You want column 4. Column 3 will presumably give you funky results when you have overlapping genes. Don't add anything together, just directly use column 4.
|
biostars
|
{"uid": 215267, "view_count": 2585, "vote_count": 2}
|
I have a large fasta file of 16S sequences and I want to retrieve sequences using a list of organism names. Do you know a script capable of doing it?
EDIT:
Headers look like that:
```
>S000000859 Bacillus sp. USC14; AF346495
sequence
>S000001027 Paenibacillus borealis; KN25; AJ011325
sequence
```
And I have a list like the following:
```
Paenibacilus borealis
Paenibacillus sp. 1-18
Paenibacillus sp. 1-49
Paenibacillus sp. A9
Paenibacillus sp. Aloe-11
```
I want to retrieve those sequences that match with names present in the list.
|
The following script should work:
```
#!/usr/bin/env perl
my $list_file = $ARGV[0];
my $fasta_in = $ARGV[1];
my $fasta_out = $ARGV[2];
open(LIST_FILE, "<", $list_file) or die "could not open '$list_file' : $! \n";
open(FASTA_IN, "<", $fasta_in) or die "could not open '$fasta_in' : $! \n";
open(FASTA_OUT, ">", $fasta_out) or die "could not open $fasta_out : $! \n";
my @headers = ();
while(<LIST_FILE>) {
chomp;
next if ( /^\s*$/ );
push(@headers, $_.";");
}
my $pat = join '|', map quotemeta, @headers;
$/ = ">";
while(<FASTA_IN>) {
chomp;
if ( /$pat/ ) { print FASTA_OUT ">$_"; }
}
close(LIST_FILE);
close(FASTA_IN);
close(FASTA_OUT);
```
Call it with:
./getFastas.pl list.txt sequences.fasta sequences.out
|
biostars
|
{"uid": 141241, "view_count": 14949, "vote_count": 3}
|
I just created a **b**ed file from the GATK walker VariantsToBinaryPed. It was of course supposed to be a **p**ed file. Now I can't use it as input to phaseByTransmission. My question is: is the file crap even if I change the file ending afterwards? If so, are all filetypes so sensitive?
And while I'm at it, an irrelevant bonus question: has anyone tried phaseByTransmission with more than a trio?
Edit: my problem was the `-ped` option. It was typed `-bed`. I'm having troubles typing *ped* here as well. My brain is telling me to go to sleep.
|
Ha! This is a confusing question. The "mansplainer" in me wants to tell you that file extensions don't matter one bit, although in this case file extensions do matter, and some understanding of the file *format* is needed.

Most people will be familiar with .bed files as text files representing interval data. In your case, you are creating a [binary version of a .ped](http://pngu.mgh.harvard.edu/~purcell/plink/data.shtml#bed) text file, which represents a pedigree for genetic analysis. Your .bed pedigree file does not equal other .bed interval files, so in this case the file extension is not only arbitrary but also confusing. Most importantly you *cannot* change the file extension from .bed to .ped and use the resulting file for software that expects the text representation of a pedigree.
|
biostars
|
{"uid": 112446, "view_count": 1885, "vote_count": 1}
|
Does anyone know of a tool for converting SNPs in VCF format to amino acid mutations in UniProt proteins?
----
I know `snpEff` can do this for Ensembl variants.
For example, for the VCF file with the line:
1 69538 COSM75742 G A . .
`snpEff` adds the following annotation:
1 69538 COSM75742 G A . . ANN=A|missense_variant|MODERATE|OR4F5|ENSG00000186092|transcript|ENST00000335137.3|protein_coding|1/1|c.448G>A|p.Val150Met|448/918|448/918|150/305||
I am looking for something that would give me the UniProt ID and the protein mutation mapped to the UniProt sequence.
|
The best tool that I could find for annotating VCF files with UniProt mutations is [Oncotator](http://portals.broadinstitute.org/oncotator/). It explicitly provides "Site-specific protein annotations from UniProt".
Alternatively, you can annotate VCF files with *Ensembl* mutations, and then map Ensembl to Uniprot using pairwise sequence alignments between proteins mapped to the same gene.
|
biostars
|
{"uid": 197549, "view_count": 3447, "vote_count": 3}
|
Hello, I've been fighting with my awk command since yesterday.
I have a file (locus.txt) containing some IgH loci from mm10 (there is no header, but the columns are: chr, start, end, strand and name_of_the_locus, separated by tabs):
chr12 113363298 113365156 - gamma3
chr12 113330756 113338695 - gamma1
chr12 113308036 113314227 - gamma2b
chr12 113274557 113277035 - gammaepsilon
chr12 113260153 113264625 - alpha
chr12 113289248 113295541 - gamma2a
chr12 113423027 113426701 - muIgh
chr12 113225832 113255223 - 3'RR
chr12 113416247 113418358 - IgD
What I want to do is to grab the minimum position in this file, so the minimum position in start column (second column : `113225832`, for 3'RR)
Then, I want to subtract this minimum from all my positions and rearrange the file like this:
gamma3 137466 139324
gamma1 104924 112863
...etc
----------
**What I have tried so far**
Search for minimum value, saved in $min :
min=`awk -v min=1000000000 '{if($2<min){min=$2}}END{print min}' locus.txt`
Then subtract the positions and rearrange the file:
awk -F $'\t' '{$1=$4=""; print $5"\t"$2-$min"\t"$3-$min}' locus.txt
But I got this :
gamma3 0 1858
gamma1 0 7939
gamma2b 0 6191
gammaepsilon 0 2478
alpha 0 4472
gamma2a 0 6293
muIgh 0 3674
3'RR 0 29391
IgD 0 2111
The only correct result is `29391` for 3'RR
It doesn't seem like a complex problem, but I can't find a way out of this...
I bet on a casting problem, but I'm not even sure. Thanks for your help!
|
## First get the minimum:
MIN=$(bc <<< $(sort -k2,2n in.file | awk 'NR == 1 {print $2}'))
## Then subtract and rearrange:
awk -v min=$MIN 'OFS="\t" {print $5, $2-min, $3-min}' in.file > out.rearranged
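For what it's worth, the reason the original attempt produced zeros: inside single quotes the shell never expands `$min`, so awk sees an uninitialized variable `min`, and `$min` therefore means `$0` (field number 0, the whole record). After `$1` and `$4` are blanked, the numeric value of `$0` is just the start coordinate, hence `$2-$min` was always 0. Passing the value in with `-v min=$MIN`, as above, avoids this. An equivalent single-command alternative that simply reads the file twice (a sketch):

    ## First pass computes the minimum start, second pass subtracts it:
    awk 'NR==FNR {if (min == "" || $2 < min) min = $2; next}
         {print $5 "\t" $2-min "\t" $3-min}' locus.txt locus.txt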
|
biostars
|
{"uid": 333887, "view_count": 1018, "vote_count": 2}
|
Do several similar motifs add to the strength of evidence, or is the information from similar motifs redundant?
For example, does the similarity of motifs 1 & 3 (from Homer2) strengthen the evidence for each ? ![enter image description here][1]
Also, in the image above, motif #4 is the reverse complement of motifs 1&3. Is that additional evidence, or mere redundancy ?
[1]: /media/images/47579a32-38cd-4045-97fd-3101de9f
|
The concept of proteins binding a given strand of DNA is our own mental concoction, and doesn't match reality. Any binding site that is at least 5-6 bases long will have the first and last base rotationally separated by at least 180 degrees because of DNA twisting. Given that most proteins don't wrap around DNA and instead bind it it from one side, it is almost a given that they will interact with bases and phosphates from both strands. For our convenience we usually represent binding sites so they match whatever is the upper strand with regard to transcription that is regulated from that binding site. There are two problems with that: 1) we don't always know whether the binding site in a converging intergenic region regulates the top-strand gene going to the right, or a bottom-strand gene going to the left; 2) not everyone designates the binding site strand with regard to downstream genes.
This is a long-winded way of saying that your motif #4 is almost guaranteed to be the same thing as motif #3. They are so perfectly symmetrical when invoking reverse complementarity that it would be difficult to explain otherwise.
|
biostars
|
{"uid": 9522341, "view_count": 584, "vote_count": 1}
|
After a MAKER run with 3 ab initio predictors and using fasta_merge -d on the resulting log file, I get 4 output files - one for each ab initio annotator, and one called "Genome.maker.proteins.fasta" which looks like the "union" of the three ab initio predictors. However, at least one of the ab initio annotation programs outputs many more proteins than the final "Genome.maker.proteins.fasta" output.

I first thought it's just proteins with AED != 1 in the final output, but proteins with AED=1 are still abundant. Other filtering flags like min_protein etc. are set to 0, so it doesn't filter these out either (standard maker_opts.ctl). It looks like it filtered relatively short proteins (<10AA) from my ab initio predictions, but there's no indication of this in my options.

I can't find anything on this in the devel lists or the wiki; is there any other filtering step done by MAKER I'm not seeing right now?
|
Have you viewed those ab initio predictions as a separate track in something like IGV and compared them to the final gene model track? If you are only looking at numbers, maybe you're not getting the entire picture. The reason you might have fewer final proteins than ab initio predictions is that MAKER tries to create a consensus gene model based on all the evidence, so multiple smaller evidence models can still result in one final gene model.
Did you use the option `always_complete=1`? Maybe if the ab initio model does not contain a start / stop codon it might be discarded in the final product.
It could also be that MAKER had conflicting evidence for some models, for example three tracks that predict formation A and one track that supports another formation... MAKER will then pick the best gene model, and that will be the one that has the evidence.
biostars
|
{"uid": 147752, "view_count": 2976, "vote_count": 1}
|
Hi there,
Fairly new to this area, so I will try to explain as clearly as possible. I have two RNA-Seq data sets - one corresponds to a series of cancer cell lines, the other to cell lines we are using as a model of 'normal' epithelium. The expression units of one data set are FPKM, those of the other RPKM. I think I superficially understand the difference between these units, in that one counts fragments from paired-end sequencing while the other counts reads (e.g. from single-end sequencing).
My question is: are these units directly comparable? The analysis I wish to carry out is fairly straightforward - I have a predefined list of around 500 genes, and simply want to compare differences in expression between the non-cancer/cancer backgrounds, in terms of which of these genes they express at all as well as the relative expression levels. For deciding which transcripts are expressed I had intended to use any value over 0 (FPKM or RPKM) as denoting expression of a transcript, but am unsure if I can compare relative abundance.
I should add that I only have access to the raw data of one dataset, the other is as a results table sent by collaborators.
|
As already mentioned, 2*RPKM should be equal to FPKM , **if** you are dealing with paired end reads in both cases.
FPKM/RPKM was intended as an approximate concentration value. **Within** an experiment, you can make comparisons between transcripts. *Between* experiments you should rather use TPM for transcripts or raw counts for genes (these values are usually normalised with tools like DESeq2 or edgeR).
Moreover, if you use different types of software to compute your bad numbers, comparisons will become even harder. Cufflinks uses an internal length normalisation and coverage based assumptions which you have to take into consideration when comparing.
Try to get the raw data from your collaborator, or at least the alignments.
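If you are stuck with FPKM/RPKM tables anyway, converting to TPM is a simple per-sample rescaling; a sketch in R (assumes a genes x samples matrix):

    # TPM = FPKM / sum(FPKM) * 1e6, applied within each sample (column)
    tpm <- apply(fpkm_matrix, 2, function(x) x / sum(x) * 1e6)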
Cheers
Michael
|
biostars
|
{"uid": 229236, "view_count": 4673, "vote_count": 1}
|
I used to do the GO analysis using these R bioconductor packages: biomaRt, clusterProfiler
First, I need to build the GO map of the bacteria genome.
Then, use the clusterProfiler to finish the enrichment analysis. (GO enrichment, KEGG enrichment ...)
My problem is: **biomaRt does not support bacteria genomes anymore. so that I can not get the GO db for the rest analysis**.
The following are my **R scripts** that worked in 2012.
R code:
```
# load libraries
library(clusterProfiler)
# Build specific GO map using GFF file
library(biomaRt)
Gff2GeneTable("NC_000962.gff")
load("geneTable.rda")
Mtb <- useMart(biomart="bacteria_mart_16",
dataset="myc_30_gene")
gomap <- getBM(attributes=c("entrezgene", "go_accession"),
filters="entrezgene",
values=geneTable$GeneID,
mart=Mtb)
#dim(gomap)
#head(gomap)
buildGOmap(gomap)
# Load the genes (differentially expressed, or other)
input_genes <- read.table("input_list.txt") # geneName geneID
input_IDS <- as.character(input_genes$geneID)
GOe <- enrichGO(input_IDS, organism = "H37Rv", ont = "BP",
pvalueCutoff = 0.05, qvalue = 0.1, readable = TRUE)
# make a plot
p1 <- plot(GOe, type = "bar", order = TRUE, showCategory = 15)
print(p1)
# write the results to a file
write.table(summary(GOe), file = "out_GOenrichment.txt", sep = "\t")
```
|
You can try to annotate your genes using [blast2GO][1] or retrieve the GO annotation from [quickGO][2] if the genome of your organism is present.
After having the GO annotation you can do the enrichment with the R package you prefer.
Also look at the many previous questions on the topic:
- https://www.biostars.org/p/100223/
- https://www.biostars.org/p/96947/#96999
- https://www.biostars.org/p/47078/
[1]: https://www.blast2go.com/
[2]: http://www.ebi.ac.uk/QuickGO/
|
biostars
|
{"uid": 128809, "view_count": 8511, "vote_count": 1}
|
Does anyone know of a tool to convert a .BED file to a probes.txt file?
This file defines the chr:start-stop coordinates for the probes/targets in the exome capture. We suggest basing this off the published target files from the vendor, but a custom file may be necessary for some designs. You can download the standard probe file used in Krumm et al. <a href="http://sourceforge.net/projects/conifer/files/probes.txt/download">here</a>. Otherwise, the probes.txt file should be a tab-delimited file with the following header and columns:
chr start stop name
1 69090 70008 OR4F5
1 565876 566576
1 801642 802733
1 861321 861393 SAMD11
1 865534 865716 SAMD11
1 866418 866469 SAMD11
1 871151 871276 SAMD11
1 874419 874509 SAMD11
1 874654 874840 SAMD11
|
Thanks for updating your post. Just take the first 3 columns of the BED file and add a header. You might need to add an empty 4th column, try it and see. You can always annotated the BED file (have a look at bedtools) if conifer complains about an empty 4th column.
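Concretely, a sketch (assumes a 4-column BED called `targets.bed`; the `sed` strips a `chr` prefix to match the `1`, `2`, ... naming in the example, and can be dropped otherwise):

    printf "chr\tstart\tstop\tname\n" > probes.txt
    cut -f1-4 targets.bed | sed 's/^chr//' >> probes.txt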
|
biostars
|
{"uid": 116036, "view_count": 2343, "vote_count": 1}
|
Hello folks,
I happen to have a small problem which seemed to be trivial at first, but keeps me busy for a while now already. Maybe you can help anew, because most Biostars posts on this issue are quite outdated:
Problem:
-------------------
I need to split 615M paired reads currently in two FastQ files (forward and reversed reads separate) into two file pairs with 308M reads.
Solution attempt A:
-------------------
I unsuccessfully tried to use line count based tools like `split` or `awk`, but the `wc -l` does not even count them correctly (line counts differ), although they are processed nicely by several tools. By googling, I found that a possible reason for this behavior might be that newline characters may occur in the quality scores, which thus hinder the core utilities.
Solution attempt B:
-------------------
I tried `BBTools`, my favorite Swiss army knife of everything sequence related, but...
bbmap/reformat.sh in=... in2=... out=... out2=... reads=308000000
bbmap/reformat.sh in=... in2=... out=... out2=... skipreads=308000000
resulted in
Input is being processed as paired
Input: 615307122 reads 91326404105 bases
Output: 615307122 reads (100.00%) 91326404105 bases (100.00%)
respectively
Input is being processed as paired
Input: 615307122 reads 91326404105 bases
Output: 0 reads (0.00%) 0 bases (0.00%)
for the second command, such that I got a copy of my data in the first case and an empty file in the second.
Solution attempt C:
-------------------
Here on Biostars, somebody had suggested `famas` for a similar problem, so I installed and tried it:
famas --in=... --in2=... --out=...XXXXXX.fq.gz --out2=...XXXXXX.fq.gz -x 308000000
However, the program flooded the output directory with thousands of small files (instead of the two files per mate actually needed) until the file system couldn't cope with the number of open files anymore and ran out of file descriptors.
ERROR(famas.c|open_output_one:1056): Couldn't open =...compressed.065534.fq.gz
ERROR(famas.c|main:1163): Couldn't open output files. Exiting...
*In case somebody is wondering why I mixed the long and short notation here: the parameter --split-every=308000000 was not recognized (unknown parameter).*
Since it took me a while to clean that mess up again, I am somewhat reluctant to try out more. Does sombeody have any ideas where I screwed up the commands or has suggestions which tools work better?
Thanks a lot for reading, pondering and help!
Matthias
|
with [seqkit][1] from the manual:
$ seqkit split2 -1 reads_1.fq.gz -2 reads_2.fq.gz -p 2 -O out
out is output directory, reads_1 is R1 and reads_2 is R2. This function is to break the file into two parts. try `-s` option for sequence number based split.
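For the exact 308M/308M split asked about here, the size-based variant would look like this (a sketch; check `seqkit split2 --help` on your version first):

    $ seqkit split2 -1 reads_1.fq.gz -2 reads_2.fq.gz -s 308000000 -O out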
[1]: https://bioinf.shenwei.me/seqkit/download/
|
biostars
|
{"uid": 451453, "view_count": 2799, "vote_count": 1}
|
Has anyone had any luck making calls to the UCSC genome API for DNA sequences from the T2T CHM13v2.0 (hs1) assembly? I can easily make calls for hg38 (e.g. api.genome.ucsc.edu/getData/sequence?genome=hg38;chrom=chr1;start=4321;end=5678), however making a call to the hs1 genome doesn't seem to be working. I've checked the list of available UCSC genomes (found here https://api.genome.ucsc.edu/list/ucscGenomes) and hs1 seems to be a valid genome to query, however I am only met with "Bad Request" status messages whenever I try to use it.
I'm sure that I'm just missing something small but some help would be greatly appreciated.
Thanks in advance!
|
I couldn't access hs1 too but there is another hub for T2T CHM13v2.0
Listing chromosomes
❯ curl 'api.genome.ucsc.edu/list/chromosomes?hubUrl=https://hgdownload.soe.ucsc.edu/hubs/GCA/009/914/755/GCA_009914755.4/hub.txt;genome=GCA_009914755.4'
Response:
{ "downloadTime": "2022:12:23T22:40:38Z", "downloadTimeStamp": 1671835238, "hubUrl": "https:\/\/hgdownload.soe.ucsc.edu\/hubs\/GCA\/009\/914\/755\/GCA_009914755.4\/hub.txt", "genome": "GCA_009914755.4", "chromCount": 25, "chromosomes": { "CP068254.1": 16569, "CP068257.2": 45090682, "CP068256.2": 51324926, "CP068259.2": 61707364, "CP086569.2": 62460029, "CP068258.2": 66210255, "CP068260.2": 80542538, "CP068261.2": 84276897, "CP068262.2": 96330374, "CP068263.2": 99753195, "CP068264.2": 101161492, "CP068265.2": 113566686, "CP068266.2": 133324548, "CP068268.2": 134758134, "CP068267.2": 135127769, "CP068270.2": 146259331, "CP068269.2": 150617247, "CP068255.2": 154259566, "CP068271.2": 160567428, "CP068272.2": 172126628, "CP068273.2": 182045439, "CP068274.2": 193574945, "CP068275.2": 201105948, "CP068276.2": 242696752, "CP068277.2": 248387328} } %
get sequence:
❯ curl 'api.genome.ucsc.edu/getData/sequence?hubUrl=https://hgdownload.soe.ucsc.edu/hubs/GCA/009/914/755/GCA_009914755.4/hub.txt;genome=GCA_009914755.4;chrom=CP068254.1;start=5;end=10'
Response:
{ "downloadTime": "2022:12:23T22:41:22Z", "downloadTimeStamp": 1671835282, "hubUrl": "https:\/\/hgdownload.soe.ucsc.edu\/hubs\/GCA\/009\/914\/755\/GCA_009914755.4\/hub.txt", "genome": "GCA_009914755.4", "chrom": "CP068254.1", "start": 5, "end": 10, "dna": "TGTAG"} %
|
biostars
|
{"uid": 9549320, "view_count": 528, "vote_count": 1}
|
Dear all,
We used ANNOVAR to annotate our detected SNPs, but the result uses the single-letter amino acid abbreviation. It looks like this:

[![current ANNOVAR output](https://preview.ibb.co/jJ6ad8/menu_saveimg_savepath20180521134604.jpg)](https://ibb.co/nsQqBT)

The result I want looks like this:

[![desired output](https://preview.ibb.co/kN40BT/menu_saveimg_savepath20180521134855.jpg)](https://ibb.co/nveYWT)

So does anyone know which annotation software can output the 3-letter amino acid abbreviation? Thanks in advance.
Best wishes,
|
Hello,
assuming that the annovar output file is a tab separated file, here is a `python` script. Save it as `one2three.py` and run it like this:
$ python one2three.py annovar.csv > annovar_fixed.csv
https://gist.github.com/finswimmer/76bdc20b35ffe5d41bf6a52ff21e1d4b
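The gist has the full script; for orientation, a minimal sketch of the core idea (the single HGVS pattern handled here is an assumption, the real script covers more cases):

    import re

    ONE2THREE = {
        "A": "Ala", "R": "Arg", "N": "Asn", "D": "Asp", "C": "Cys",
        "Q": "Gln", "E": "Glu", "G": "Gly", "H": "His", "I": "Ile",
        "L": "Leu", "K": "Lys", "M": "Met", "F": "Phe", "P": "Pro",
        "S": "Ser", "T": "Thr", "W": "Trp", "Y": "Tyr", "V": "Val",
        "*": "Ter",
    }

    def one2three(hgvs_p):
        """Convert e.g. 'p.V600E' to 'p.Val600Glu'; leave other notations as-is."""
        m = re.fullmatch(r"p\.([A-Z*])(\d+)([A-Z*])", hgvs_p)
        if m is None:
            return hgvs_p
        ref, pos, alt = m.groups()
        return "p.{}{}{}".format(ONE2THREE[ref], pos, ONE2THREE[alt])

    print(one2three("p.V600E"))  # p.Val600Glu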
fin swimmer
|
biostars
|
{"uid": 316062, "view_count": 2141, "vote_count": 1}
|
Hi all, During Differential gene expression analysis of RNASeq data (DESEq2 or Cufdiff) which is best method to filter differentially expressed genes? Should I go with all the genes having adjusted P value < 0.05 or should I filter them based on a log2 Fold change cut-off?
Thank you
|
biostars
|
{"uid": 272557, "view_count": 3751, "vote_count": 1}
|
Hello
I have run snp2hla on my dataset and got the imputed results, but I am unable to interpret them. The software's website also does not have sufficient information. Could anyone help me with this? Thanks in advance.
|
The output has P or A against each haplotype, which indicates presence or absence of that haplotype, respectively.
|
biostars
|
{"uid": 292707, "view_count": 1672, "vote_count": 2}
|
I would like to get distances between the different samples of an RNA-seq experiment. I read that the VST and rlog functions of the DESeq2 R package are good for applying a correction so that the standard deviation of a gene's expression across all samples doesn't change with the mean (of that gene's expression across all samples).
My questions are:
1 - Should these corrections be applied after normalising raw counts for sequencing depth (with the DESeq() function) or directly applied on the raw data?
2 - To do a heatmap with a dendrogram representing the distances between samples, is it better to plot in a heatmap the values corrected with VST/rlog or FPKM values?
3 - 'VST' method seems to be better for big sets (n>30). I have 3 samples, so that means need to choose 'rlog' instead?
4 - In both methods we can set parameter 'blind'. Should I set it to 'TRUE' or 'FALSE' in which situations?
Regards.
|
> 1 - Should these corrections be applied after normalising raw counts
> for sequencing depth (with the DESeq() function) or directly applied
> on the raw data?
These should only be applied to the normalised expression levels ('counts'), as per the <a href="https://bioconductor.org/packages/3.7/bioc/vignettes/DESeq2/inst/doc/DESeq2.html">DESeq2 vignette</a>.
> 2 - To do a heatmap with a dendrogram representing the distances
> between samples, is it better to plot in a heatmap the values
> corrected with VST/rlog or FPKM values?
Don't use FPKM values - the method of normalisation that produces FPKM expression levels should no longer be used for multi-sample studies. Instead, use either the VST- or rlog-transformed counts. Please see the answer that I gave earlier today: https://www.biostars.org/p/328304/#328324
**Edit April 21, 2020:** if you must use FPKM in a heatmap or for any downstream application, I would transform them to z scale via *zFPKM* package
> 3 - 'VST' method seems to be better for big sets (n>30). I have 3
> samples, so that means need to choose 'rlog' instead?
You can justify the use of either. rlog is not recommended for large datasets because it can take a very long time. I tend to check both, where possible, and find that results don't largely change between both of these methods (provided that there are no outliers in your dataset).
> 4 - In both methods we can set parameter 'blind'. Should I set it to
> 'TRUE' or 'FALSE' in which situations?
Please read: http://bioconductor.org/packages/release/bioc/vignettes/DESeq2/inst/doc/DESeq2.html#blind-dispersion-estimation
Kevin
|
biostars
|
{"uid": 328345, "view_count": 4292, "vote_count": 2}
|
Hi, in GSE131592 each file corresponds to the count table of one sample, and each file has three columns. Does anyone know how to get the counts for a sample?
The picture shows GSM3790428 as an example. No reply from the dataset author yet.
![example][1]
[1]: /media/images/c34af3bb-79cc-4ea1-94e8-2661c114
|
That is htseq count data. The three columns represent the number of reads assigned to that gene assuming the library is 1) unstranded 2) stranded in the forward direction, 3) stranded in the reverse direction.
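If a single gene-by-sample matrix is needed, a sketch in R (the file pattern and the choice of the third, reverse-stranded column are assumptions; pick the column matching the library protocol):

    files <- list.files(pattern = "\\.txt$")          # assumed naming
    tabs <- lapply(files, read.table, row.names = 1)  # gene IDs as row names
    counts <- sapply(tabs, `[[`, 3)                   # 3rd count column = reverse-stranded
    rownames(counts) <- rownames(tabs[[1]])
    colnames(counts) <- files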
|
biostars
|
{"uid": 9492507, "view_count": 2039, "vote_count": 1}
|
I have a huge input file like this,
```
con1 Traes4 99.26 135 1 0 139 543 1 135 5.0 279 1506 135
con13 Traes7 97.61 544 13 0 482 2113 200 544 0.0 101 2392 544
con13 Traes8 100.00 467 0 0 713 2113 1 467 0.0 901 2392 467
con15 EMT18 95.27 148 7 0 73 516 35 48 6.0 256 2560 148
con15 EMT29 89.86 148 15 0 73 516 100 148 3.0 243 2560 48
```
I am trying to extract contigs that have full lengths and % identity > 30, but I also allow +-10 aa differences,
```py
start_to_end=(hsp[9]-hsp[8])+1
if abs(hsp[13]-start_to_end) < 10 and hsp[2] >= 30:
```
However, I want the data to be grouped by line[0]. For example, con13 has Traes8, which fits my request, and Traes7, which does not; but since I want all conX lines to be grouped, it is enough that one con13 line meets my request: my output should print both con13 lines in result1, and none of con13 should ever appear in the result2 file. Note: I also want to print the single con1 line since it meets my requirement. Note 2: con13 might have >20 different Traes.
Here is my script. It prints results, but line by line without grouping by con. How can this be solved?
```py
from itertools import groupby

with open('file.txt') as f1:
    with open('result1.txt', 'w') as f2:
        with open('result2.txt', 'w') as f3:
            for k, g in groupby(f1, key=lambda x: x.split()[0]):
                hits = []
                for line in g:
                    hsp = line.split()
                    hsp[9], hsp[13], hsp[8], hsp[2] = int(hsp[9]), int(hsp[13]), int(hsp[8]), float(hsp[2])
                    hits.append(hsp)
                    print line.rstrip()
                    #for I in range(1,len(hits)):
                    start_to_end = (hsp[9] - hsp[8]) + 1
                    if abs(hsp[13] - start_to_end) < 10 and hsp[2] >= 30:
                        f2.write(line.rstrip() + '\n')
                    else:
                        f3.write(line.rstrip() + '\n')
```
This is the result I get,Result1.txt
```
con1 Traes4 99.26 135 1 0 139 543 1 135 5.0 279 1506 135
con13 Traes8 100.00 467 0 0 713 2113 1 467 0.0 901 2392 467
```
Result2.txt
```
con13 Traes7 97.61 544 13 0 482 2113 200 544 0.0 101 2392 544
con15 EMT18 95.27 148 7 0 73 516 35 48 6.0 256 2560 148
con15 EMT29 89.86 148 15 0 73 516 100 148 3.0 243 2560 48
```
But my desired outputs should be like this, Result1.txt
```
con1 Traes4 99.26 135 1 0 139 543 1 135 5.0 279 1506 135
con13 Traes7 97.61 544 13 0 482 2113 200 544 0.0 1010 2392 544
con13 Traes8 100.00 467 0 0 713 2113 1 467 0.0 901 2392 467
```
Result2.txt
```
con15 EMT18 95.27 148 7 0 73 516 35 48 6.0 256 2560 148
con15 EMT29 89.86 148 15 0 73 516 100 148 3.0 243 2560 148
```
|
The following seems to be more what you're going for:

    #!/usr/bin/env python
    from itertools import groupby, tee
    import csv

    f = csv.reader(open("foo.txt", "r"), dialect="excel-tab")
    of1 = csv.writer(open("results1.txt", "w"), dialect="excel-tab")
    of2 = csv.writer(open("results2.txt", "w"), dialect="excel-tab")
    for k, g in groupby(f, lambda x: x[0]):
        passed = 0
        g, g2 = tee(g)
        for line in g:
            # same filter as yours: identity >= 30 and full length within +-10
            if float(line[2]) >= 30 and abs(int(line[13]) - (int(line[9]) - int(line[8]) + 1)) < 10:
                passed += 1
                break
        if passed > 0:
            for line in g2:
                of1.writerow(line)
        else:
            for line in g2:
                of2.writerow(line)

This will write an entire group to `results1.txt` if any of its members pass the filtering step. Otherwise, all of the members of the group will be written to `results2.txt`. Note that in your original example, each of the 3 groups had at least one member passing the filter (EMT29 passes, so all of contig 15 would pass). Of importance is the usage of `tee()`. Once you start using an iterator (e.g., `for line in g`) you can't then rewind or reuse it. Ways around that would be to (1) store everything in an array yourself or (2) just use `tee` to make a copy of the iterator for you.
|
biostars
|
{"uid": 97623, "view_count": 6536, "vote_count": 1}
|
I am seeing some tools ask for these two types of inversions (head-to-head or tail-to-tail), but most callers just produce one breakpoint for each inversion without specifying whether it is head-to-head or tail-to-tail, and I'm not sure how to extract them. For example, in one delly output I have the following variant:
```
15 48518173 INV00004531 T <INV> . PASS PRECISE;SVTYPE=INV;SVMETHOD=EMBL.DELLYv0.8.2;CHR2=15;END=50467460;PE=8;MAPQ=60;CT=3to3;CIPOS=-4,4;CIEND=-4,4;SRMAPQ=60;INSLEN=0;HOMLEN=3;SR=7;SRQ=1;CONSENSUS=TATTGCCACAAATGGATGTGTCAGAGGAGGTAATTAAACTTGTGACCCGCACCATCATTTTTGGGTTTTTGTTCATGAAATCCTTGCATAAGCCAATGTCTA;CE=1.96136;RDRATIO=0.710181;SOMATIC GT:GL:GQ:FT:RCL:RC:RCR:CN:DR:DV:RR:RV 0/1:-20.9631,0,-99.9607:10000:PASS:274972:389731:252274:1:47:8:32:10 0/0:0,-12.3096,-141.967:123:PASS:275394:551246:254224:2:61:0:41:0
```
The connection type is 3to3 in this record. From what I assume, since this is an inversion, there should be a second breakpoint that is 5to5. Why isn't there a balancing record for this record or any record that i can find? Perhaps the second breakpoint is directly implied, but in that case, why does delly choose to produce both 5to5 and 3to3 connection types (their totals are in approximately the same proportion), instead of only 5to5 or only 3to3? Maybe I'm not understanding the difference of these types.
|
Only simple inversions have both types of paired-end clusters. In cancer you often find complex SVs and in that case these SV types just tell you the type of genomic adjacencies, as described [here][1]
[1]: https://github.com/tobiasrausch/wally#paired-end-view
|
biostars
|
{"uid": 9539724, "view_count": 323, "vote_count": 1}
|
I have paired-end sequencing reads (Illumina) in .fastq format (I have 2 files, so not interleaved) that I want to demultiplex.
They contain 2 samples which each have 2 barcodes, one for the forward read and one for the reverse read (so 4 barcodes in total).
A typical read looks like this:
first_part_primer - barcode - last_part_primer - rest_of_sequence
The primer sequence before the barcode can have different lengths or sometimes it is missing completely.
I have tried using [sabre](https://github.com/najoshi/sabre) with the following command:
sabre pe -f file1.fq -r file2.fq -b barcode.txt -u unmatched_file1.fq -w unmatched_file2.fq -c
where the barcode.txt is formatted as is suggested in the sabre documentation.
This works fine when the barcodes are at the beginning of the reads, but when the barcodes are somewhere in the middle of the reads, it doesn't recognize them. This seems to be the case for most demultiplexing tools I have seen.
I have also found [demultiplex](https://demultiplex.readthedocs.io/en/latest/usage.html), but this gives errors during installation.
My question is how to demultiplex the reads in this case?
Please let me know if you have any suggestions or solutions.
**Edit**
This is an example of how the reads look
    @A00153:690:HJVN5DSXY:1:1101:1470:1000 1:N:0:GAACGTGA+AACTGGTG
    AGGTCAGTCACATGGTTAGGACGCAGATAGACAACGAAAACGAACGGGATAAAATATTTAACTTGCGGGACGGATTCAGCTCTCACTACGACCAGCACTACCTAAGAAGATCGGAAGAGCACACGTCTGAACTCCAGTCACGAACGTGAA
    +
    F:FFFF:FFFFF:F:,,F:,FF:FFFFFFFFFFFF:FFFF,FFFF,:,FFFFFFFFFFFFF,FFFFF:,F,FFFFFFFF,FFFFFFFFF,FFFF,:F,F::FF,::,F:FFF,FFFFFFF,F::FF,:,FFFFFFF:FFF,F:FF,FF::
    @A00153:690:HJVN5DSXY:1:1101:3568:1000 2:N:0:GAACGTGA+AACTGGTG
    AGGTCAGTCACATGGTTAGGACGCAGCGAGTAAACGAAAACGAACGGGATAAATACGGTAATCGAAAACCGATACGATCGGCATAGAAAAGGTTGACAAGGAAATTGACGAATTGAAGCAGAAACTGGAAAACTTGGTAAAACAAGAAGC
    +
    FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF,FFFFFFFFFFFFFFFFFFFFFFF:F:FFFFFFFFFFFFFFFFFFFFFFFF:FFF:FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF:FFFFFFFFFFFFFFF
    @A00153:690:HJVN5DSXY:1:1101:4056:1000 2:N:0:GAACGTGA+AACTGGTG
    AGGTCAGTCACATGGTTAGGACGCAGCGAGTAAACGAAAACGAACGGGATAAATACGGTAATCGAAAACCGATACGATCCGGTCGGGTTAAAGTCGAAATCGGACGGGAACCGGTATTTTTGTTCGGTAAAATCACACATGGCTACGAAC
    +
    FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF:FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
The first example is repeated below where the italics indicate the primer sequence and the bold text the barcode sequence.
*AGGTCAGTCACATGGTTAGGACGCA **GATAGACA** ACGAAAACGAACGGGATAAA* ATATTTAACTTGCGGGACGGATTCAGCTCTCACTACGACCAGCACTACCTAAGAAGATCGGAAGAGCACACGTCTGAACTCCAGTCACGAACGTGAA
|
Inspired by the comments of both @GenoMax and @swbarnes2, I have found a solution, though possibly not the most efficient one.
It is a combination of bash commands and [```bbsplitpairs.sh```](https://jgi.doe.gov/data-and-tools/bbtools/bb-tools-user-guide/repair-guide/) (part of the BBMap/BBTools suite).
It uses the following steps:
1. ```grep --no-group-separator -h -A 2 -B 1 -E 'barcode1|barcode2' $file1 $file2 > $outputfile```
This searches for all lines in both fastq files that contain either of the two barcodes. When a line contains one of the barcodes, it stores that line (the actual sequence), the previous line (the header)(```-B 1```) and the next two lines (the dummy line with the ```+``` and the quality score of the read)(```-A 2```) to an output file. The reads from both input files are stored in a single output file (the output file is thus interleaved now).
2. ```cat $outputfile | paste - - - - | sort -k1,1 -S 1G | tr '\t' '\n' > $outputfilesorted```
The disadvantage of the first step is that the read pairs are not ordered. All the downstream processes expect that read pairs are stored together like this:
- readpair 1, forward read
- readpair 1, reverse read
- readpair 2, forward read
- readpair 2, reverse read
- etc.
To accomplish this, the reads are sorted based on their headers so that the reads that form a pair are subsequent reads. This is stored in another output file that should have the same size as the output file from the first step.
3. ```bash bbsplitpairs.sh -Xmx1g in=$outputfilesorted out=$outputpair outs=$outputsing fint```
After step 2 the fastq files are correctly formatted, but there are still singletons in the files (reads that do not have a paired partner because the paired partner either has a barcode that belongs to another sample or does not contain any barcodes at all). Those singleton reads have to be removed from the files in order for the downstream processes to work correctly.
For this I use the bbsplitpairs tools from the bbmap package.
These three steps are repeated for all samples that are present in the data with the appropriate barcodes.
When I check the output files, the reads indeed seem to be properly split and combined.
|
biostars
|
{"uid": 476952, "view_count": 1334, "vote_count": 1}
|
I am working with fairly low coverage GBS data (average <11 read depth), and as such am wondering if it makes sense for me to remove PCR duplicates from my data, as it seems that these are just adding extra depth. Does anyone know if there is a general rule that should be followed in this situation?
Thanks!
edit: I forgot to mention that I have paired end reads, and I'm not sure if this will change the answer to my question.
|
Actually, I think that GBS is one of the very few applications in which you can avoid removing PCR duplicates, because reads start at a limited set of restriction sites, so the sequenced space is small enough that you will find perfect duplicates by chance alone.
However, this depends on several properties. For example, with paired-end reads and such low coverage, you should have few duplicates. If you have a lot, then you have a problem, and your reads represent the PCR artifacts more than the distribution of the sample.
I also suggest that you refer to some of the several software packages developed for working on GBS data, such as STACKS: https://github.com/enormandeau/stacks_workflow
EDIT: As Eric correctly pointed out, the correct link to STACKS is this: http://catchenlab.life.illinois.edu/stacks/
I apologize for the mistake.
|
biostars
|
{"uid": 260663, "view_count": 2685, "vote_count": 1}
|
Title says it all - just looking to set a few of these up for various reasons.
For instance, I may teach a bioinformatics course coming up. I would love to have the students all download their data from GEO or somewhere, run their pipeline in a Docker container, and then independently verify (or disconfirm) a published report.
Thus, if there is a vignette that does the leg-work of identifying the experiment etc. already written that would be best, but honestly just dockerized pipelines (of good quality) would be close - I could make the vignette from there.
Thanks v much!
|
You're probably looking for something like https://nf-co.re/ for full, high-quality pipelines.
They are still not trivial to run however.
|
biostars
|
{"uid": 9512035, "view_count": 703, "vote_count": 1}
|
Hello! I hope somebody can provide some insight into TopHat and Bowtie2 behavior. I have a data set that I aligned to a reference using Bowtie2 and Tophat, both running default settings. When I used Bowtie, the alignment rates were >90% for all samples. With Tophat, however, the rates were all around 45-55%. Since Tophat uses Bowtie, I'm not sure why the resulting alignment rates are so much lower. My best guess is that when Tophat calls Bowtie, it uses different settings than the default settings for a user using Bowtie directly, but I haven't been able to figure this out from the respective manuals. Could anyone explain why this might be? I'm curious because there's apparently something about my reads that is very sensitive to differences in aligners used, and I want to know what it is.
I'm not sure what additional information is needed to answer this question but I'm ready to provide it. Thank you!
ETA: I'm using Tophat2 and Bowtie2, latest versions of both
|
Hi everyone, I have an answer to this after consulting with one of the developers. Posting in case anyone here is interested or comes across this when googling.
The difference between how Bowtie2 runs by default and how TopHat2 calls Bowtie2 is in the minimum alignment score required to consider an alignment valid. In Bowtie2, this minimum is set to -0.6 - 0.6 * read length. TopHat2 runs Bowtie2 with the alignment score minimum set to a constant of -14. So TopHat2 has a much more stringent criterion for which alignments it considers valid. (HISAT2, which is derived from Bowtie2, uses a default minimum threshold of 0 - 0.2 * read length. With my samples, this resulted in alignment rates around 75% - right in between the Bowtie2 and TopHat2 results.)
The developer I communicated with was surprised that this parameter made such a drastic difference in alignment results, so it's worth paying attention to. This is likely because I'm dealing with diverse wild populations that might have significantly diverged from the reference genome.
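To make the arithmetic concrete, here is a sketch for a 50 bp read (the index and file names are placeholders):
```
# bowtie2 end-to-end default (--score-min L,-0.6,-0.6): -0.6 + -0.6*50 = -30.6
# tophat2's constant minimum:                            -14
# hisat2 default (L,0,-0.2):                              0 + -0.2*50 = -10
# tophat2 is therefore by far the most stringent; bowtie2's threshold can be
# set explicitly to experiment with this:
bowtie2 --score-min L,-0.6,-0.6 -x genome_index -U reads.fq -S out.sam
```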
|
biostars
|
{"uid": 366167, "view_count": 3271, "vote_count": 2}
|
Is someone familiar with the internals of CellRanger in terms of how it compares the cellular barcodes obtained from a cDNA library vs the one obtained from a feature barcode library? I am in the situation that I inherited a project where the preprocessing was done with CellRanger, outputting several thousands of "good" cells (so with matched CBs for both cDNA and feature barcode libraries), but I am completely unable to replicate this with indepdendent methods. I did a preprocessing with [Alevin](https://combine-lab.github.io/alevin-tutorial/2020/alevin-features/). This yielded literally no overlap between detected barcodes for cDNA and feature barcode libraries. I then tried aligning the CBs (the first 16bp of R1 of the feature barcode libraries) to the CBs that CellRanger returned resulting in < 1% mapping rate with bowtie2 even in lenient `--very-fast` mode. I verfied with the same strategy that both the cDNA and feature barcode CBs align with > 95% to the 10X-provided 3M-whitelist, so that is not the issue, these are 10X libraries and all other QC is good. Still, no overlap between cDNA and feature barcodes.
Any comments on this? @rob, hope you see this, is there anything fundamentally different towards how Alevin and CR treat/identify CBs?
|
Hi @atpoint ,
It's a common problem with feature barcoding pipelines, please checkout the following discussion on the salmon repo.
I'm pretty sure that'll solve the problem, if not please feel free to reach out.
https://github.com/COMBINE-lab/salmon/discussions/576
|
biostars
|
{"uid": 9506747, "view_count": 907, "vote_count": 1}
|
Hey all.
Currently working on a project to do mitochondrial variant calling on whole exome data. Our probes are about 120 bp and are set up to capture the entirety of the chrM contig at extremely high depth.
We're currently looking at a few different tools, and the new GATK best practices Mutect2 mito pipeline that incorporates a double-alignment strategy looks very promising. The thing is, the --mitochondria-mode flag is brand spanking new, and there just isn't a lot of documentation or usage examples for replicating the pipeline at the command line. I've tried contacting support a few times, but GATK support is quite understaffed at the moment.
Thinking instead that we might use their fully built TERRA cloud pipeline [(found here)][1] to generate our vcfs, but the pipeline docs indicate that the workflow is configured for full WGS bam/crams, not WES bam/crams.
People who are familiar with TERRA and variant calling... do you think it is possible to do WES on this workflow? What required inputs would need to change? I'm guessing a few of the interval lists?
Also, there are quite a few optional inputs you can use. Can anyone suggest some I might want to use besides setting a vaf_filter_threshold?
[1]: https://app.terra.bio/#workspaces/help-gatk/Mitochondria-SNPs-Indels-hg38
|
>Currently working on a project to do mitochondrial variant calling on whole exome data. Our probes are about 120 bp and are setup to capture the entirety of the chrM contig at extremely high depth.
As I understand it a typical exome kit doesn't include mtDNA probes because the kit would be overwhelmed by the number of mtdna compared to nuclear dna and any attempt to include probes can also end up amplifying NUMTs. Maybe you have a better solution or more specific probes.
> Thinking instead that we might use their fully built TERRA cloud pipeline (found here) to generate our vcfs, but the pipeline docs indicate that the workflow is configured for full WGS bam/crams, not WES bam/crams.
WGS doesn't suffer as much from the problems I mentioned above (though NUMTs are still a thing), so they just figured you would have done it this way. The first step in this TERRA WDL workflow just filters for chrM anyway. It's about as crude as you'd imagine and would behave the same for any bam file, WES included.
    task SubsetBamToChrM {
      input {
        File input_bam
        File input_bai
        String contig_name
        String basename = basename(basename(input_bam, ".cram"), ".bam")
        File? ref_fasta
        File? ref_fasta_index
        File? ref_dict
        File? gatk_override
        # runtime
        Int? preemptible_tries
      }
      Float ref_size = if defined(ref_fasta) then size(ref_fasta, "GB") + size(ref_fasta_index, "GB") + size(ref_dict, "GB") else 0
      Int disk_size = ceil(size(input_bam, "GB") + ref_size) + 20
      meta {
        description: "Subsets a whole genome bam to just Mitochondria reads"
      }
      parameter_meta {
        ref_fasta: "Reference is only required for cram input. If it is provided ref_fasta_index and ref_dict are also required."
        input_bam: {
          localization_optional: true
        }
        input_bai: {
          localization_optional: true
        }
      }
      command <<<
        set -e
        export GATK_LOCAL_JAR=~{default="/root/gatk.jar" gatk_override}

        gatk PrintReads \
          ~{"-R " + ref_fasta} \
          -L ~{contig_name} \
          --read-filter MateOnSameContigOrNoMappedMateReadFilter \
          --read-filter MateUnmappedAndUnmappedReadFilter \
          -I ~{input_bam} \
          -O ~{basename}.bam
      >>>
      runtime {
        memory: "3 GB"
        disks: "local-disk " + disk_size + " HDD"
        docker: "us.gcr.io/broad-gatk/gatk:4.1.1.0"
        preemptible: select_first([preemptible_tries, 5])
      }
      output {
        File output_bam = "~{basename}.bam"
        File output_bai = "~{basename}.bai"
      }
    }
|
biostars
|
{"uid": 399551, "view_count": 1890, "vote_count": 1}
|
Hi,
I have 10 fasta files (each file with 20 gene sequences from each of the 10 samples). I would like to create 20 files, specific to each gene from 10 samples.
I proceeded as follows to extract genes with the file_name in header:
pyfasta extract --header --fasta test.fasta gene_name1 | awk '/^>/ {$0=$0 "_sample1"}1' > gene_name1.fasta
Output:
>gene_name1_sample1
ATGC
I am successful in creating multiple gene fasta files for each gene from each sample (a part from loop):
pyfasta extract --header --fasta $sample.fasta gene_name1 >> gene_name1.fasta
pyfasta extract --header --fasta $sample.fasta gene_name2 >> gene_name2.fasta
But, I am unable to add file_name to the header of files in loop (but can do for 1 file as mentioned in the beginning).
Kindly guide.
Thanks.
|
The question is phrased in fairly general terms. From what I understand, this should work:
If the sample files are not already single-line fasta, [linearize][1] them on the command line:
for file in *.fasta; do awk '/^>/ {printf("\n%s\n",$0);next; } { printf("%s",$0);} END {printf("\n");}' < $file > "`basename $file .fasta`_single-line.fasta"; done
Then run this python script:
    #!/usr/bin/env python
    import glob
    from collections import defaultdict

    genes = defaultdict(list)
    for file in glob.glob('*_single-line.fasta'):
        # sample name without the _single-line.fasta suffix
        sample = file.split('_single-line.fasta')[0]
        with open(file, 'r') as f:
            for line in f:
                if line.startswith(">"):
                    # Gene ID : (sample name, sequence)
                    genes[line.strip().split('>')[1]].append((sample, next(f).strip()))

    for g in genes:
        # open one output fasta per gene name
        with open(g + '.fasta', 'w') as out:
            for sample, seq in genes[g]:
                # write >gene_sample, then the sequence
                out.write('>' + g + '_' + sample + '\n' + seq + '\n')
[1]: https://www.biostars.org/p/9262/
|
biostars
|
{"uid": 268147, "view_count": 5654, "vote_count": 1}
|
Hello folks!
I want to mask a genome with a particular repeat library using RepeatMasker.
Then I want to cross the coordinates of the repeats with those of gene annotations, to find overlaps between them and study associations and so on.
I'm only starting to consider feasible ways to do that, so any input would be great.
Thanks!
|
The answer by Jon is a good one to get overlaps, and I can tell you how to make the GFF. In the 'util' directory of the RepeatMasker distribution there is a script called 'rmOutToGFF3.pl' that will do the conversion. The usage is pretty simple since it writes to stdout:
```
perl rmOutToGFF3.pl my_repeatmasker.out > my_repeatmasker.gff
```
biostars
|
{"uid": 164800, "view_count": 2600, "vote_count": 1}
|
Hi,
I am analyzing genomic (RNA-seq) data from Patient Derived Xenograft tumor samples, where cancer patient tumors are transplanted and grown in a mouse, harvested, and the extracted DNA and RNA are then sequenced. I have never done this before and am wondering whether the human or mouse reference genome should be used. I would guess the human reference genome should be used for alignment, but is it possible that there would be some mixed-in mouse cells?
Thanks,
- Pankaj
|
Hi,
you would use the human reference for alignment, of course.
There are some methods to remove possibly contaminating reads originating from mouse cells. You can keep only uniquely mapped reads with high mapping quality.
There is also a [tool][1] (Xenome) built specifically to separate reads from xenograft samples.
|
biostars
|
{"uid": 197768, "view_count": 3074, "vote_count": 2}
|
I have chromosome start and end coordinates for a given chromosome. I need to translate these to a locus - something like "11q1.4-q2.1", meaning it is on the long arm of chromosome 11, somewhere in the range from sub-band 4 of band 1 to sub-band 1 of band 2.
From wikipedia: http://en.wikipedia.org/wiki/Locus_%28genetics%29
You can download the coordinates of the cytobands from the UCSC [here](http://hgdownload.cse.ucsc.edu/goldenPath/hg18/database/cytoBand.txt.gz).
```
curl -s "http://hgdownload.cse.ucsc.edu/goldenPath/hg18/database/cytoBand.txt.gz" | gunzip -c
chr1 0 2300000 p36.33 gneg
chr1 2300000 5300000 p36.32 gpos25
chr1 5300000 7100000 p36.31 gneg
chr1 7100000 9200000 p36.23 gpos25
chr1 9200000 12600000 p36.22 gneg
chr1 12600000 16100000 p36.21 gpos50
chr1 16100000 20300000 p36.13 gneg
chr1 20300000 23800000 p36.12 gpos25
chr1 23800000 27800000 p36.11 gneg
chr1 27800000 30000000 p35.3 gpos25
(...)
```
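Assuming you save the decompressed table as `cytoBand.txt`, mapping a coordinate to its band is then a one-liner (the chromosome and position here are example values):
```
# prints the band containing chr1:12,000,000, i.e. "1p36.22"
awk -v c="chr1" -v p=12000000 '$1==c && $2<=p && p<$3 {sub(/^chr/,"",$1); print $1 $4}' cytoBand.txt
```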
|
biostars
|
{"uid": 4355, "view_count": 17268, "vote_count": 3}
|
Hello everyone,
I would like to have per-base coverage of a bam file of an RNAseq experiment, but only on exons.
I know how to do that on all genome with bedtools:
*bedtools genomecov -d -ibam aligned_reads.bam -g genome.fa > cov.txt*
And I have a gff file giving me the position of exons on my species.
Does anyone know a way to have the same result, but only on given exon positions?
Thanks,
Guillaume
|
From the gff, extract only the lines describing exons, something like:

    awk -F'\t' '$3=="exon"' file.gff > exons.gff

Now use bedtools *coverage* with the above gff and your bam file, with -d and the other options that suit your purpose.
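For instance (a sketch, reusing the bam name from the question):
```
# -d reports the depth at each position of every exon interval
bedtools coverage -a exons.gff -b aligned_reads.bam -d > exon_cov.txt
```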
|
biostars
|
{"uid": 227454, "view_count": 2469, "vote_count": 1}
|
Hello Biostars,
I am very new to genomics and have been given scripts from other bioinformaticians to learn from. Within these scripts they used specific .bed files to analyse a panel of genes and perform annotation.
I understand there are methods of turning a .fasta file or a .bam file into a .bed file. However, I do not understand how to extract information from specific genes e.g. how to create a .bed file which maps all the collagen genes or how to create a .bed file with only exons.
Does anyone know the process of performing such analysis, or know of any databases where .bed files may be stored?
Many thanks,
Krutik
|
After [installing BEDOPS][1], here's a way to get a BED file of genes from a central reference like Gencode:
```
wget -qO- ftp://ftp.ebi.ac.uk/pub/databases/gencode/Gencode_human/release_28/gencode.v28.annotation.gff3.gz \
| gunzip --stdout - \
| awk '$3 == "gene"' - \
| convert2bed -i gff --attribute-key="gene_name" - \
> genes.bed
```
Likewise, for exons:
```
wget -qO- ftp://ftp.ebi.ac.uk/pub/databases/gencode/Gencode_human/release_28/gencode.v28.annotation.gff3.gz \
| gunzip --stdout - \
| awk '$3 == "exon"' - \
| convert2bed -i gff --attribute-key="gene_name" - \
> exons.bed
```
Via: https://bioinformatics.stackexchange.com/questions/895/how-to-obtain-bed-file-with-coordinates-of-all-genes
(Using the `--attribute-key="gene_name"` option with `convert2bed` will bring in HGNC symbol names, which in some contexts can be more commonly used (and useful) for gene names than Ensembl IDs.)
If you want specific genes (say you have a list of gene names or symbols in a file called `genes_of_interest.txt`), you can use `grep`:
```
grep -wFf genes_of_interest.txt genes.bed > genes_of_interest.bed
```
To do mapping of genes to a set of reads of interest, you can use BEDOPS `bam2bed` to convert BAM and `bedmap` to map:
```
bam2bed < reads.bam > reads.bed
bedmap --echo --echo-map-id-uniq reads.bed genes_of_interest.bed > answer.bed
```
[1]: https://bedops.readthedocs.io/en/latest/content/installation.html
|
biostars
|
{"uid": 9504448, "view_count": 1730, "vote_count": 1}
|
I'm kind of new to this space-- a friend of mine says he uses SNPeff for all his exome annotations, and he doesn't know of any other popular tools for this purpose.
I'm annotating some human exomes and I am curious about what else is out there. A search gave me a lot of answers, but I don't know which are popular in the community. Are there gaps the SNPeff leaves that other effect predictors fill? Thank you so much for reading my post!
|
The tools I hear used most frequently are SnpEff, VEP, and Annovar. This [paper][1] (Table 1) shows a comparison of the three tools.
SnpEff tends to be robust and I personally use it the most. Remarkably, SnpEff can effectively annotate even structural variants and long indels, in addition to traditional smaller variants. I've used Annovar once or twice but strange bugs crop up here and there; however the developer of it maintains it well and offers a lot of [documentation][2]. VEP seems quite popular, but I personally have the least experience with this one.
[1]: https://genomebiology.biomedcentral.com/articles/10.1186/s13059-016-0974-4
[2]: http://annovar.openbioinformatics.org/en/latest/
|
biostars
|
{"uid": 263990, "view_count": 6008, "vote_count": 4}
|
How are variant callers able to determine which chromosome copy, in a diploid species, carries a detected heterozygous variant? What information do they use - just paired-end reads?
I could find some statistical detail on the GATK webpage, but I would like to understand whether other information is used, the rationale behind it, the accuracy it would have, and the factors that affect this process.
Thank you very much
|
Usually the caller has no idea which chromosome homolog a variant is on. It can just see variants that are in the same read or read pair (unlikely for short reads) or it can try to infer which variants are on the same chromosome homolog (phased) using read-backed phasing (as part of the read assembly performed by the haplotype caller).
These in silico methods are spotty at best. Most people who need phasing just use a long-read technology, or they sequence the parents.
https://software.broadinstitute.org/gatk/documentation/tooldocs/3.8-0/org_broadinstitute_gatk_tools_walkers_phasing_ReadBackedPhasing.php
https://www.illumina.com/techniques/sequencing/dna-sequencing/whole-genome-sequencing/phased-sequencing.html
|
biostars
|
{"uid": 319627, "view_count": 1027, "vote_count": 2}
|
I have a dataset of 32 samples quantified using `salmon`. I used `tximport` to import the data:
txi=tximport(c(a vector of file names),type="salmon",tx2gene=tx2gene)
I have created the colData and used `DESeqDataSetFromTximport` to construct the Deseq2 data set:
dds <- DESeqDataSetFromTximport(txi = txi,colData = col_data,design = ~ Form)
In this case, all 32 samples are used; now I only want to use a subset of them. Is there a way to subset my current txi rather than constructing a new txi using only the samples that I want?
|
The DESeq2 object is basically a SummarizedExperiment, so you can subset it prior to running `DESeq` using standard operations such as `dds_use <- dds[, columns_to_keep]`, where `columns_to_keep` is a numeric vector with the columns (samples) you want to use.
|
biostars
|
{"uid": 442188, "view_count": 1869, "vote_count": 1}
|
I'm trying to produce a simple window => coverage table from a bam file. I produced (non-sliding) windows with `bedtools makewindows` and then ran `bedtools coverage -abam alignment.bam -b reference.windows > coverage`. The problem is that I cannot make sense of the output I got. Here's a sample:
```
I 0 215 SRR1297046.176411/2 23 + 0 215 0,0,0 1 215, 0, 1 215 215 1.0000000
I 0 216 SRR1297046.2210768/1 23 + 0 216 0,0,0 1 216, 0, 1 216 216 1.0000000
I 1 222 SRR1297046.178368/1 24 + 1 222 0,0,0 1 221, 0, 1 221 221 1.0000000
```
The first field is the chromosome name and the fourth one is the read's, but what are the others? This output is inconsistent with the ones described [here][1]. How can I convert this output into a "window => coverage" table?
I'd appreciate any help.
[1]: http://bedtools.readthedocs.org/en/latest/content/tools/coverage.html?highlight=coverage
|
If you want the coverage of each entry in `reference.windows` then `bedtools coverage -a reference.windows -b alignment.bam > coverage` is what you want.
BTW, this will probably double count overlapping paired-end reads. You could alternatively use `samtools depth` with some post-processing and not have to worry about that.
|
biostars
|
{"uid": 172179, "view_count": 9738, "vote_count": 1}
|
How do I get the start and end positions of the Hemoglobin Z transcript ENSG00000130656 using a Python REST API? Is it possible? For example I want to get the start and end positions of these exons: http://www.ensembl.org/Homo_sapiens/Transcript/Exons?db=core;g=ENSG00000130656;r=16:152687-154503;t=ENST00000252951
|
I came up with the answer. I can simply access this url: http://rest.ensembl.org/overlap/id/ENSG00000130656.json?feature=exon
The documentation is quite comprehensive, once it is located: http://rest.ensembl.org/documentation/info/overlap_id
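For example, the same endpoint can be queried from the command line (and the URL works just as well from Python with `requests` or `urllib`):
```
curl 'http://rest.ensembl.org/overlap/id/ENSG00000130656.json?feature=exon'
```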
|
biostars
|
{"uid": 156125, "view_count": 1774, "vote_count": 1}
|
Are all [GO terms][1] manually curated or are they computationally curated or mixture of both?
How about GO from [MSigDB][2] ?
[1]: http://www.geneontology.org/
[2]: http://software.broadinstitute.org/gsea/msigdb/
|
As far as I know, the ontology itself is entirely curated. On the other hand, the gene annotations are a mixture of curation and automatic annotations. This is tracked with [evidence codes][1]: for example, annotations tagged IEA (Inferred from Electronic Annotation) are assigned computationally without individual curator review, while codes such as EXP (Inferred from Experiment) or IDA (Inferred from Direct Assay) mark curated, experimentally supported annotations.
[1]: http://www.geneontology.org/page/guide-go-evidence-codes
|
biostars
|
{"uid": 242559, "view_count": 1071, "vote_count": 3}
|
Hi all,
I have annotated my genome with the classical combo MAKER + Blast2GO. I am writing now a paper about this genome and this protein coding gene annotation.
What is usually done in terms of uploading a protein-coding gene annotation? Do people upload it to their GitHub or to a specific database?
Thanks for your help.
|
For the European entry to INSDC use ENA.
For the US door to INSDC use NCBI.
In both cases you must convert your annotation.
For submission to ENA you need to use [EMBLmyGFF3][1].
For submission to NCBI you need to use [GAG][2].
[1]: https://github.com/NBISweden/EMBLmyGFF3
[2]: https://github.com/genomeannotation/GAG
|
biostars
|
{"uid": 429114, "view_count": 940, "vote_count": 2}
|
Hi,
I have a file with five columns, and I want the mean of the last four columns across rows where the value in the first column is the same. For example, my file looks like this -
A 1 1 1 1
A 1 3 1 1
A 1 1 2 3
B 5 7 2 4
C 2 1 5 1
C 2 2 3 6
The desired output is -
A 1 1.7 1.3 1.7
B 5 7 2 4
C 2 1.5 4 3.5
Any help would be appreciated.
Thanks.
|
import pandas as pd
df = pd.read_csv("tmp.txt", sep="\t", header=None)
df
0 1 2 3 4
0 A 1 1 1 1
1 A 1 3 1 1
2 A 1 1 2 3
3 B 5 7 2 4
4 C 2 1 5 1
5 C 2 2 3 6
df.groupby(0).mean()
1 2 3 4
0
A 1.0 1.666667 1.333333 1.666667
B 5.0 7.000000 2.000000 4.000000
C 2.0 1.500000 4.000000 3.500000
df.groupby(0).mean().to_csv("tmp_mean.txt", sep="\t", header = None)
|
biostars
|
{"uid": 386555, "view_count": 892, "vote_count": 1}
|
I'm looking for a Python package that has all the capabilities of **METATOOL 5.0** for doing **Elementary Mode Analysis**. I found **ScrumPy** and **PyNetMet**. I'm having a little trouble installing all the dependencies for ScrumPy and before I proceed, I was wondering if it was worth it for doing what I want to do. Any help would be greatly appreciated :)
If you're not familiar with Elementary Mode Analysis, "These are the smallest sub-networks that allow a metabolic reconstruction network to function in steady state.[[28]][1][[29]][2] According to Stelling (2002),[[29]][2] elementary modes can be used to understand cellular objectives for the overall metabolic network. Furthermore, elementary mode analysis takes into account [stoichiometrics][3] and [thermodynamics][4] when evaluating whether a particular metabolic route or network is feasible and likely for a set of proteins/enzymes.[[28]][1]" from the wiki http://en.wikipedia.org/wiki/Metabolic_network_modelling#Elementary_mode_analysis.
[1]: http://en.wikipedia.org/wiki/Metabolic_network_modelling#cite_note-Schuster_2000-28
[2]: http://en.wikipedia.org/wiki/Metabolic_network_modelling#cite_note-Stelling_2002-29
[3]: http://en.wikipedia.org/wiki/Stoichiometry
[4]: http://en.wikipedia.org/wiki/Thermodynamics
|
**PySCeS** is the best option, as there are regular updates and more documentation than you would ever need to read.
|
biostars
|
{"uid": 136818, "view_count": 2080, "vote_count": 1}
|
Hi
I would like to plot RNASeq data that I have downloaded from TCGA in a PCA plot. I have found some great guides on how to plot the actual data in PCA using r in ggplot2 and such but my main question is what format data should I plot?
I currently have raw counts and RSEM data. Should I input raw counts into something like edgeR or deseq2 and filter for expression by cpm first? Should I normalise it? Should I stabilise variance using rlog2? Or convert to TPM and plot that? Argh I'm so confused. Grateful for any advice you can give me :)
|
Dear Elizabeth,
In the simplest scenario (4 samples; 4 genes; 1 experimental condition), your metadata object, which you may have to read in from a file, could look like:
ID Condition
MA M
MB M
PA P
PB P
...whilst your counts file could look like:
MA MB PA PB
gene1 45 46 25 22
gene2 45 45 45 44
gene3 10 10 9 4
gene4 88 67 34 44
This could then be read into DESeq2 as:
dds <- DESeqDataSetFromMatrix(rawcounts, colData=metadata, design=~Condition)
|
biostars
|
{"uid": 272304, "view_count": 7809, "vote_count": 3}
|
Hi,
I have a set of files that I’d like to perform a function on, with the goal of applying one or more parameters in that function that include more than one possible state.
For example, I might have two samples, each with their own fasta file: sample_A and sample_B.
I want to perform a blast search for each input fasta file, but I also want to loop through a range of word sizes for every blast process for each sample. Say, three values: 11, 13, 15.
This would mean that for the sample_*.fasta input, I’d generate three blast output files, each one reflecting one of those three word size values.
I am struggling to understand how to structure the snakemake rule for input and output names, because they don't share the same wildcards - there is an extra name from the blast parameter in the output that isn't part of the input name.
Thanks for advice on how to include a parameter name in a snakemake rule into the output name!
|
Maybe this?
```
samples = ['sample_a', 'sample_b', 'sample_c']
word_sizes = [11, 13, 15]

rule all:
    input:
        expand('blast/{sample}.{word_size}.out', sample= samples, word_size= word_sizes),

rule blast:
    input:
        fa= '{sample}.fa',
    output:
        out= 'blast/{sample}.{word_size}.out',
    shell:
        r"""
        blastn -word_size {wildcards.word_size} -query {input.fa} -out {output.out} ...
        """
```
Note that the `expand()` function will create all combinations of sample and word_size and returns a list of strings. If you want more control on what combinations to have you can use any python code to create such list.
Also, this assumes fasta files are named with the `sample` prefix; if this is not the case you can use a dictionary to map samples to fasta files. In this latter case you may need to use a function as input to the `blast` rule.
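As a side note, a dry run will list exactly which sample/word_size jobs the `expand()` generates before anything is executed:
```
snakemake -n
```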
|
biostars
|
{"uid": 9463762, "view_count": 1477, "vote_count": 2}
|
Hi,
I have a list of variants called from a individual genome and I'm trying to filter out the important predispositions from it. My approach was to download the `variant_summary.txt.gz` file from ClinVar website, in which most of the variants related to human health are being recorded, so that I can intersect my variants with it.
I loaded the `variant_summary.txt` into R and it says the Dataset has 154358 rows and 25 columns. But when I check with `wc-l` linux the number of records is 198661. I double checked the no of rows by visualizing the data in excel. It had 198,661. My questions are,
1. Why R does not load all the records of my file?
2. Given the fact that I'm still novice to bioinformatics do you think that my approach is feasible in finding predispositions if I fix the R issue?
Thanks you very much.
|
You can read in the file like so:
```
var.anno = read.delim("Downloads/variant_summary.txt", header=T)
dim(var.anno)
[1] 198660 25
```
That gave me the correct dimensions; the first row contains the header. If you read the file with `read.table` instead, note that its defaults (`quote = "\"'"` and `comment.char = "#"`) can silently merge or drop lines containing apostrophes or hash characters, which is the most likely reason you saw fewer rows; `read.delim` uses only `quote = "\""` and disables the comment character.
Regarding your second question, I think it is reasonable to try to annotate detected variants and their association with phenotypes from available databases. Depending on what you are after, you might also want to consult other variant databases, e.g. dbSNP. dbSNP also links to ClinVar if there is [an entry there][1] and to OMIM.
There are also R packages for this purpose, some here:
- http://cran.r-project.org/web/packages/NCBI2R/index.html
- SNP-related packages in bioconductor http://www.bioconductor.org/packages/release/BiocViews.html#___SNP
[1]: http://www.ncbi.nlm.nih.gov/SNP/snp_ref.cgi?rs=4988235
|
biostars
|
{"uid": 104921, "view_count": 2475, "vote_count": 1}
|
Greetings all.
I am searching for novel somatic mutations using various public software and annotation tools such as ANNOVAR.
I have been using my own cancer exome data, not public data - 'just real patient data' - and I finally completed running the calling programs (MuTect and ANNOVAR) and discovered some genes that look disease-related to me.
One of my genes starts with LOC - its full name is LOC100129697 - and it has a variation in a nonsynonymous region.
I hope that this gene is related to the disease, and with this hope in mind, I ran IGV to analyze this variation more deeply.
However, when I looked it up in IGV, I was surprised to see that the gene name IGV shows me is totally different from LOC100129697.
How can I interpret this gene LOC100129697?
Can a gene have another nickname?
|
From NCBI:
> **Symbols beginning with LOC.** When a published symbol is not available, and orthologs have not yet been determined, Gene will provide a symbol that is constructed as 'LOC' + the GeneID. This is not retained when a replacement symbol has been identified, although queries by the LOC term are still supported. In other words, a record with the symbol LOC12345 is equivalent to GeneID = 12345. So if the symbol changes, the record can still be retrieved on the web using LOC12345 as a query, or from any file using GeneID = 12345.
|
biostars
|
{"uid": 129299, "view_count": 10340, "vote_count": 6}
|
Hi,
Could someone clarify the concept of phased/unphased genotypes for me? Is it that in the first case we know the haplotypes and in the second case we don't?
Thanks.
|
You know the haplotypes in each case. The difference is that for phased heterozygous variants, you know which variants are on the same chromosomal copy. For example, take heterozygous variant 1 (A/C) and variant 2 (T/G). One chromosome copy will have an A at location 1, and the other will have a C. But what allele will be present at location 2 for the chromosome copy with an A at location 1? With unphased variants, you don't know - could be T or G. With phased variants, you know which one it is. Phasing is not necessarily complete; often you can only phase small regions.
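In VCF notation this is exactly the difference between the `/` (unphased) and `|` (phased) genotype separators. Using the two example variants above, and assuming REF/ALT of A/C and T/G respectively:
```
variant 1  GT 0|1   -> A on haplotype 1, C on haplotype 2
variant 2  GT 1|0   -> G on haplotype 1, T on haplotype 2
# together: haplotype 1 carries A...G, haplotype 2 carries C...T
# written as 0/1 and 1/0 instead, no such assignment would be implied
```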
|
biostars
|
{"uid": 229350, "view_count": 3296, "vote_count": 1}
|
Hi all,
As you can see below, I have two columns of interest. I want to replace the values smaller than 0.500000 in the column A_Freq with the values in the M.F column in the same rows. What is the best approach?
For example: replace 0.312500 in the third row of the column A_Freq with 0.687500 from the same row in the column M.F.
CHROM_POS A_Freq M.F N_Chr
1 CM009840.1_1096 0.812500 0.187500 16.25000
2 CM009840.1_1177 0.611111 0.388889 12.22222
3 CM009840.1_1276 0.312500 0.687500 6.25000
4 CM009840.1_1295 0.277778 0.722222 5.55556
5 CM009840.1_1471 0.250000 0.750000 5.00000
6 CM009840.1_1518 0.875000 0.125000 17.50000
7 CM009840.1_1527 0.222222 0.777778 4.44444
8 CM009840.1_1533 0.777778 0.222222 15.55556
9 CM009840.1_1630 0.250000 0.750000 5.00000
10 CM009840.1_1639 0.000000 1.000000 0.00000
11 CM009840.1_1711 0.500000 0.500000 10.00000
12 CM009840.1_1972 0.250000 0.750000 5.00000
13 CM009840.1_2030 0.142857 0.857143 2.85714
14 CM009840.1_2101 0.375000 0.625000 7.50000
15 CM009840.1_2690 0.687500 0.312500 13.75000
16 CM009840.1_2849 0.142857 0.857143 2.85714
17 CM009840.1_3013 0.312500 0.687500 6.25000
18 CM009840.1_3042 0.714286 0.285714 14.28572
19 CM009840.1_3062 0.250000 0.750000 5.00000
20 CM009840.1_3128 0.250000 0.750000 5.00000
Best Regard
Modtafa
|
library(tidyverse)
new_df <- mutate(df, A_Freq = ifelse(A_Freq >= 0.5, A_Freq, M.F))
|
biostars
|
{"uid": 344116, "view_count": 25799, "vote_count": 2}
|
Dear Colleagues,
I would like to do some visualization of my network in Cytoscape. My file contains genes and their expression values in normal and tumor samples. The visualization of interest would be the same network colored according to A) expression in normal samples and B) expression in tumor samples.
Honestly, I completely do not know how the input file with such annotation should be prepared. A basic input file to create a network in Cytoscape looks like:
Gene A - Gene B
(and some additional annotations).
Some example of such analysis shown in Cytoscape tutorial indicates that such input file should look like this:
GENE COMMON gal1RG gal1RG gal80R
YHR051W COX6 -0.034 -0.034 -0.304
YHR124W NDT80 -0.090 -0.090 -0.348
YKL181W PRS1 -0.167 -0.167 0.112
YGR072W UPF3 0.245 0.245 0.787
(unfortunately, it is not described how the values were obtained).
This raises my question: what do these values correspond to?
If I put expression values only for the genes in the first column, then I won't be able to color the genes from the second column. Moreover, in Cytoscape I can use only one column as such an annotation, so there is no way to provide two columns with expression values for both nodes (source and target).
I would appreciate any suggestion on how to solve this issue. Maybe there is some mathematical way to calculate "something" that I could use as a representation of expression, or maybe there is some other software able to do something like this?
Thanks in advance.
Best regards!
|
**INPUT DATA**
You need two input files to color your network according to expression values in normal and cancer samples.
First file defines interactions between genes. For example, Excel file (`interactions.xls`):
Source Target
Gene1 Gene2
Gene1 Gene3
Gene2 Gene3
Gene2 Gene4
Gene3 Gene4
Gene3 Gene4
Gene4 Gene5
Second file lists gene names with their expression values in normal and cancer samples. For example, Excel file (`expressions.xls`):
Gene_name Normal Cancer
Gene1 1.92 1.56
Gene2 0.56 2.02
Gene3 0.34 0.01
Gene4 0.98 0.98
Gene5 0.02 0.87
**CYTOSCAPE**
1. Open Cytoscape.
2. `File` > `Import` > `Network` > `From file`
3. Load the `interactions.xls` file. Define first column as *Source*, and second as *Target*.
4. Load the `expressions.xls` file.
- `File` > `Import` > `Table` > `File`.
- Label first column as *Gene_name*, second as *Normal* and third as *Cancer*.
5. Color your network according to expression values in the *Normal* column.
- `Style` > `Fill color` > `Column`: Normal
- `Mapping Type`: Continuous
- `Current mapping`: choose your colors.
**OUTPUT**
![enter image description here][1]
Pozdrawiam (Regards) :)
[1]: http://i66.tinypic.com/2dv770k.png
|
biostars
|
{"uid": 296298, "view_count": 7665, "vote_count": 3}
|
Dear all,
I have a molecule in PyMOL which I want to open in ChimeraX. But after exporting the molecule as .pdb, some structural information seems to be lost - regions that are shown as helices in the PyMOL session file are now loops (this affects only a few residues). If I open the .pdb in PyMOL, it looks the same as in ChimeraX. I have no idea what I keep doing wrong - any hints? The first 14 residues of the original protein (from the PDB, corresponding to a His-tag) were removed with a custom PyMOL script, which changed the residue numbering so that it properly starts with 1 - that is why the molecule from PyMOL is needed. I would be grateful for the help.
best,
maciek
|
For anyone who might be interested:
1 - open original .pdb in text editor
2 - remove all atom records (lines starting with ATOM) corresponding to unwanted residues, save the file
3 - go to https://wenmr.science.uu.nl/pdbtools/ and renumber your sequence (saved file) as desired
4 - go back to your original structure file, find all lines starting with HELIX or SHEET, copy them to the beginning of your corrected file (just before the first residue), and apply corrections to the secondary-structure element ranges (for example, I deleted 14 residues from the beginning, so the element ranges need to be shifted back by 14)
5 - save the file and check if the structure adheres to what you expect
For me, it worked perfectly.
|
biostars
|
{"uid": 9484236, "view_count": 1547, "vote_count": 2}
|
Hi,
I have a bigWig containing ChIP-seq signal for a histone mark (H3K4me1), and I want to produce a bed file containing the peaks (the same way MACS produces a bigWig and a peak bed file). How can I do that? If it's impossible, I can realign the raw data and redo the whole peak-calling step, but I'd prefer to avoid that.
Thanks
|
MACS2 can call peaks from a bedGraph: you can convert the bigWig to bedGraph first with bigWigToBedGraph from [here][1], and then call peaks with [MACS2][2]'s `bdgpeakcall` subcommand.
> bdgpeakcall: Call peaks from bedGraph output.
[1]: http://hgdownload.cse.ucsc.edu/admin/exe/linux.x86_64/
[2]: https://github.com/taoliu/MACS/
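A minimal sketch of the two steps (the cutoff, minimum length, and maximum gap values are placeholders to tune for your mark):
```
bigWigToBedGraph signal.bigWig signal.bdg
macs2 bdgpeakcall -i signal.bdg -c 2 -l 200 -g 30 -o peaks.bed
```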
|
biostars
|
{"uid": 106094, "view_count": 19492, "vote_count": 3}
|