Part:BBa K4815000:Design
PYPH1 -> Pymaker-generated yeast promoter, High 1
Assembly compatibility:
- RFC[10]: COMPATIBLE
- RFC[12]: INCOMPATIBLE (illegal NheI site found at 1)
- RFC[21]: INCOMPATIBLE (illegal BamHI site found at 198)
- RFC[23]: COMPATIBLE
- RFC[25]: COMPATIBLE
- RFC[1000]: INCOMPATIBLE (illegal BsaI site found at 78)
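For context, the sites flagged above can be found with a simple scan. Below is a minimal Python sketch (our illustration, not the Registry's actual validation code) that searches a sequence for the NheI, BamHI, and BsaI recognition sites on both strands; the example sequence is a placeholder, not PYPH1.

```python
# Minimal sketch: scan a part sequence for restriction sites that break
# BioBrick assembly standards. Recognition sequences are the standard ones;
# the example sequence below is a hypothetical placeholder, not PYPH1.

ILLEGAL_SITES = {
    "NheI": "GCTAGC",   # forbidden by RFC[12]
    "BamHI": "GGATCC",  # forbidden by RFC[21]
    "BsaI": "GGTCTC",   # forbidden by RFC[1000] (Type IIS)
}

def reverse_complement(seq):
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def find_illegal_sites(seq):
    """Return (enzyme, 1-based position, strand) for every hit."""
    seq = seq.upper()
    hits = []
    for enzyme, site in ILLEGAL_SITES.items():
        strands = [(site, "+")]
        rc = reverse_complement(site)
        if rc != site:  # skip the duplicate scan for palindromic sites
            strands.append((rc, "-"))
        for probe, strand in strands:
            start = seq.find(probe)
            while start != -1:
                hits.append((enzyme, start + 1, strand))  # Registry positions are 1-based
                start = seq.find(probe, start + 1)
    return sorted(hits, key=lambda h: h[1])

if __name__ == "__main__":
    example = "GCTAGC" + "A" * 70 + "GGTCTC"  # placeholder, not the real part
    for enzyme, pos, strand in find_illegal_sites(example):
        print(f"Illegal {enzyme} site found at {pos} ({strand} strand)")
```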
Design Notes
Length: the functional part of PYPH1 is the core promoter, and we chose a length of 80 bp mainly for two reasons. First, an 80 bp window is short enough that a bound nucleosome would likely cover the entire region, which simplifies modeling of accessibility. Second, the entire region can be sequenced with a 150-cycle kit with overlap in the middle, which helps in sequencing and validating the promoter sequence; a quick arithmetic check of the overlap follows below.
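To make the 150-cycle argument concrete: paired-end 150 bp reads meet in the middle whenever the amplicon is under 300 bp, so the 80 bp core plus flanking sequence is read from both ends. A back-of-the-envelope check, with an assumed 250 bp amplicon length (an illustration, not our exact construct):

```python
# Back-of-the-envelope check that 2 x 150 bp paired-end reads cover an
# amplicon containing the 80 bp core with overlap in the middle.
# The 250 bp amplicon length is an assumed illustration.

READ_LEN = 150
amplicon_len = 250  # hypothetical: 80 bp core plus flanking scaffold/primers

overlap = 2 * READ_LEN - amplicon_len
print(f"Mate overlap: {overlap} bp")  # 50 bp -> every base is read at least once
assert overlap > 0, "reads would not meet in the middle"
```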
Loci: we place PYPH1 at approximately -170 to -90 upstream of the start codon, because this region surrounds the presumed transcription start site (TSS), where most transcription factor binding sites lie. We expect that modifying this part is the most efficient way to change the expression rate; in other words, expression is more sensitive to changes in this 80 bp sequence than to changes elsewhere in the promoter.
Composition: after deciding the core promoter sequence, we still need a scaffold that links it to the downstream coding sequence and makes up a complete promoter. We took our ‘pA-pT‘ scaffold from previous research. It links the core promoter to the coding sequence and provides BamHI and XhoI restriction sites, so plasmids carrying the scaffold can easily accept various core promoter sequences; a sketch of this composition follows below.
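A minimal sketch of the composition, assuming placeholder poly-A/poly-T arms and an illustrative ordering of the BamHI and XhoI sites (the real pA-pT scaffold sequence comes from the cited prior work):

```python
# Sketch: drop an 80 bp core promoter into the scaffold between the
# BamHI and XhoI sites. The scaffold arms below are hypothetical
# placeholders, and the site ordering is illustrative only.

BAMHI = "GGATCC"
XHOI = "CTCGAG"

upstream_scaffold = "A" * 20    # placeholder poly-A arm
downstream_scaffold = "T" * 20  # placeholder poly-T arm

def build_promoter(core_80bp):
    assert len(core_80bp) == 80, "core promoter must be exactly 80 bp"
    return upstream_scaffold + BAMHI + core_80bp + XHOI + downstream_scaffold

promoter = build_promoter("ACGT" * 20)  # dummy core sequence
print(len(promoter), "bp total")
```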
Source
We carried out a large-scale search for raw data that could be used to train the AI, and finally found a dataset published in a Nature article: a total of 30 million sets of core promoter sequences and expression data. The format is shown in the following figure: randomly synthesized core promoter sequences paired with their expression rate, represented by relative fluorescence intensity measured by a high-throughput technique (described in detail in the wet-lab cycle on the Engineering Success page). The dataset is large enough to broadly cover the possible interactions between the 80 bp core promoter and transcription factors. We further generated sub-datasets of the full data at various sample sizes to train Pymaker, and used the best-performing model to generate PYPH1.
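To illustrate the sub-dataset step, the sketch below draws nested random subsets of several sizes from a (sequence, fluorescence) table; the file name, column layout, and subset sizes are assumptions for the example, not our actual pipeline:

```python
# Sketch: draw nested random sub-datasets of increasing size from the full
# (sequence, expression) table, for training Pymaker at several data scales.
# File name, column layout, and sizes are assumptions for this example.

import random

def load_dataset(path):
    """Parse tab-separated lines of '<80 bp sequence>\t<fluorescence>'."""
    with open(path) as f:
        return [line.rstrip("\n").split("\t") for line in f]

def make_subsets(records, sizes, seed=0):
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    # Nested subsets: each smaller set is contained in the larger ones,
    # so differences in model quality reflect scale, not sampling noise.
    return {n: shuffled[:n] for n in sizes}

if __name__ == "__main__":
    data = load_dataset("core_promoters.tsv")  # hypothetical file name
    subsets = make_subsets(data, sizes=[10_000, 100_000, 1_000_000])
    for n, subset in subsets.items():
        print(n, "requested ->", len(subset), "records")
```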
References