[Feature Request] Adding PIPSeq barcode detection #366
Dear Mahmoud, thanks a lot for getting in touch with the request to implement PIP-seq! It seems like an interesting solution for omitting the microfluidics of 10x or similar platforms. All the best,
Is your feature request related to a problem? Please describe.
No. There is a new company, Fluent Biosciences, whose single-cell product is PIPseq. This tech is (in my opinion) promising in that it doesn't require an expensive upfront instrument like the 10x: it can be done with tubes, a centrifuge, a vortexer, and a thermocycler. So it has potential as a starting point for many experiments without the costs of other machines like the 10x.
Describe the solution you'd like
It would be great if zUMIs could be updated to process PIPseq data and its unique barcode structure.
Describe alternatives you've considered
The PIPseq tech uses a different barcode structure than those previously processed with zUMIs. It is complicated in that there are 0-3 bp at the start that are used for phasing the remainder of the barcode/UMI structure. If you look at the attached documentation, you will see that the "barcode/UMI" on read 1 has the following layout: 0-3 random bp for phasing, then Tier 1 = 8 bp (from the list provided), a 3 bp random linker, Tier 2 = 6 bp, a 3 bp random linker, Tier 3 = 6 bp, a 5 bp random linker, Tier 4 = 8 bp, and then the UMI = 12 bp.
Next, looking at the PIPseq User Guide, you can see on page 35 how R1 and R2 are laid out when sequenced. According to Fluent, R1 needs at least 54 bp to accommodate the phasing bases.
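To illustrate, the layout above could be parsed roughly like this (a minimal sketch, not an actual zUMIs implementation; `TIER1_WHITELIST` holds hypothetical placeholder sequences, since the real Tier 1 list comes from Fluent's documentation):

```python
# Sketch of PIPseq R1 barcode/UMI extraction, assuming the layout described
# above: 0-3 bp phasing, Tier1 (8 bp), 3 bp linker, Tier2 (6 bp), 3 bp linker,
# Tier3 (6 bp), 5 bp linker, Tier4 (8 bp), UMI (12 bp).

TIER1_WHITELIST = {"ACGTACGT", "TTGCAAGG"}  # hypothetical example entries


def parse_pipseq_r1(read1: str):
    """Try phase offsets 0-3; return (cell_barcode, umi) or None."""
    for phase in range(4):
        # 8+3+6+3+6+5+8+12 = 51 bp are needed after the phasing bases
        if len(read1) < phase + 51:
            break
        pos = phase
        tier1 = read1[pos:pos + 8]
        if tier1 not in TIER1_WHITELIST:
            continue  # wrong phase offset (or a sequencing error)
        pos += 8 + 3                      # skip Tier1 and the 3 bp linker
        tier2 = read1[pos:pos + 6]
        pos += 6 + 3                      # skip Tier2 and the 3 bp linker
        tier3 = read1[pos:pos + 6]
        pos += 6 + 5                      # skip Tier3 and the 5 bp linker
        tier4 = read1[pos:pos + 8]
        pos += 8
        umi = read1[pos:pos + 12]
        return tier1 + tier2 + tier3 + tier4, umi
    return None
```

With phasing resolved this way, the four tiers concatenate into a 28 bp cell barcode, which matches the "at least 54 bp" R1 requirement (3 phasing + 51 structural bases) mentioned in the User Guide.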
So, with all that said, would it be possible for zUMIs to process these datasets in the near future?