Part:BBa K4712014

Latest revision as of 13:23, 12 October 2023


H1N1-F8

The primers were designed with NCBI BLAST and SnapGene to achieve efficient and specific amplification. In RPA, this primer serves as the initial binding site for the amplification process, ensuring reaction specificity by targeting the desired influenza A virus (2009 H1N1) DNA or RNA sequences. These primers also provided data for the mathematical model used in further primer design.

Sequence and Features


Assembly Compatibility:
  • COMPATIBLE WITH RFC[10]
  • COMPATIBLE WITH RFC[12]
  • COMPATIBLE WITH RFC[21]
  • COMPATIBLE WITH RFC[23]
  • COMPATIBLE WITH RFC[25]
  • COMPATIBLE WITH RFC[1000]


Apparatus: Thermal Cycler, Centrifuge, Fluorescence Quantitative PCR Instrument (Ya Rui), 2 mL reaction tubes, pipettes and pipette tips

Materials:
1. DNA Isothermal Amplification Kit (EZ-Life Biotechnology)
2. ddH2O
3. Isothermal amplification specific primers:

 INFB-F1, INFB-R1
 INFB-F2, INFB-R2
 INFB-F3, INFB-R3
 INFB-F4, INFB-R4
 INFB-F5, INFB-R5
 H1N1-F6, H1N1-R6
 H1N1-F7, H1N1-R7
 H1N1-F8, H1N1-R8

4. DL500 marker
5. Test sample: DNA Template (pUC57-M1), Concentration: 1000 cps/μL, Storage: -20 °C

Methods:
1. For a 20 μL reaction system:

Reagent                 Stock Concentration   Volume Added (μL)
Forward Primer          10 μM                 1
Reverse Primer          10 μM                 1
Rehydration Buffer      2X                    10
DNA Template            10 nM                 2
H2O                     -                     To 18
Starter                 10X                   2
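The H2O volume ("To 18") is found by difference: the pre-Starter mix is brought to 18 μL so that adding 2 μL of 10X Starter yields the 20 μL reaction. A minimal sketch of that arithmetic (the function name and dictionary layout are ours, not from the kit protocol):

```python
# Compute the H2O needed to bring the pre-Starter mix to 18 uL,
# so that adding 2 uL of 10X Starter gives a 20 uL total reaction.
def water_volume(components, pre_starter_total=18.0):
    """components: reagent -> volume in uL, excluding H2O and Starter."""
    used = sum(components.values())
    if used > pre_starter_total:
        raise ValueError("component volumes exceed the pre-Starter total")
    return pre_starter_total - used

mix = {
    "Forward Primer (10 uM)": 1.0,
    "Reverse Primer (10 uM)": 1.0,
    "Rehydration Buffer (2X)": 10.0,
    "DNA Template (10 nM)": 2.0,
}
print(water_volume(mix))  # 4.0 uL of H2O, then 2 uL Starter -> 20 uL total
```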

2. Gently tap the tube several times to mix, then centrifuge briefly (mix gently; avoid vigorous shaking or vortexing).
3. Incubate at 37 °C for 20 minutes.
4. After heat inactivation at 65 °C for 10 minutes, run the products on the gel.

Electrophoresis Lane   Content
1                      DL500 marker
2                      INFB-F1, INFB-R1
3                      INFB-F2, INFB-R2
4                      INFB-F3, INFB-R3
5                      INFB-F4, INFB-R4
6                      INFB-F6, INFB-R6
7                      INFB-F5, INFB-R5
8                      H1N1-F6, H1N1-R6
9                      H1N1-F7, H1N1-R7
10                     H1N1-F8, H1N1-R8

https://static.igem.wiki/teams/4712/wiki/result/fig1b.png

https://static.igem.wiki/teams/4712/wiki/engineering/engineering/fig1.png

In the initial phase, we trained the model on a limited dataset of 74 labeled data points; evaluation on a held-out portion showed that a support vector machine (SVM) fit the task well.
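A minimal sketch of this first round in Python with scikit-learn, assuming synthetic placeholder data (the team's actual primer features, e.g. GC content or melting temperature, are not listed on this page):

```python
# Round 1 sketch: train an SVM on 74 labeled primer data points with a
# random train/test partition. Features and labels are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((74, 4))                      # hypothetical primer features
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)    # placeholder works/fails label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```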

In the second round, we focused on fine-tuning the SVM model parameters, again training on the same 74 data points. Both the first and second rounds of training and evaluation used randomly partitioned datasets.
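Parameter fine-tuning of this kind is commonly done with a cross-validated grid search; a sketch under the same synthetic-data assumption (the grid values are illustrative, not the team's actual search space):

```python
# Round 2 sketch: tune SVM hyperparameters (C, gamma) by cross-validated
# grid search over the same 74 points. Data are synthetic placeholders.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.random((74, 4))                      # hypothetical primer features
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)    # placeholder labels

grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.1, 1.0]},
    cv=5,                                    # 5-fold random partitioning
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 2))
```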

In the third round, the model predicted the performance of 40 entirely new, unlabeled primers. These predictions were then checked by experimental verification; the predicted classes agreed closely with the experimental classes, confirming the model's accuracy and reliability.
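The third round amounts to predicting on unlabeled candidates and scoring agreement against wet-lab outcomes. A sketch, again with synthetic stand-ins for both the primer features and the experimental labels:

```python
# Round 3 sketch: predict classes for 40 new, unlabeled primers, then
# compare against experimental results. All data are synthetic stand-ins.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X_train = rng.random((74, 4))                               # hypothetical features
y_train = (X_train[:, 0] + X_train[:, 1] > 1.0).astype(int)
model = SVC(kernel="rbf").fit(X_train, y_train)

X_new = rng.random((40, 4))                  # 40 unlabeled candidate primers
predicted = model.predict(X_new)             # class predicted before any experiment
experimental = (X_new[:, 0] + X_new[:, 1] > 1.0).astype(int)  # stand-in wet-lab labels
print(f"agreement: {accuracy_score(experimental, predicted):.2f}")
```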

This approach underscores the potential of machine learning to reduce the number of wet-lab experiments required for primer screening.