This notebook is unfinished and still under development.
It provides a simplified environment for processing 1D Bruker NMR datasets with SPIKE.
Run each Python cell in sequence by using the ⇥Run button above (or typing Shift-Enter).
Cells are meant to be used in order, taking you through the complete analysis, but you can go back at any time.
The SPIKE code used for processing is visible in the cells, and can be used as a minimal tutorial.
Remark: to use this program, you should have installed the following packages:
- spike (version 0.99.9 minimum)
- ipywidgets (tested with version 7.1)
- ipympl (adds interactivity in the notebook)
The following cell should be run only once, at the beginning of the processing.
# load all python and interactive tools
from __future__ import print_function, division
from IPython.display import display, HTML, Markdown, Image
display(Markdown('## STARTING Environment...'))
%matplotlib widget
import os.path as op
import spike
from spike.File.BrukerNMR import Import_1D
from spike.Interactive import INTER as I
from spike.Interactive.ipyfilechooser import FileChooser
display(Markdown('## ...program is Ready'))
from importlib import reload # the two following lines are debugging help
reload(I) # and can be removed safely when in production
I.hidecode()
The FileChooser() tool creates a dialog box which allows you to choose a file on your disk.
Use the Select button to pick the file; the path argument can be used to start the exploration at a given location.
Once chosen, the selected file name is available in FC.selected.
FC = FileChooser(path='/DATA/',filename='fid')
display(FC)
Importing the dataset is simply done with the Import_1D() tool, which returns a SPIKE object.
We store the dataset into a variable; typing the variable name displays a summary of the dataset.
print('Reading file ',FC.selected)
d1 = Import_1D(FC.selected)
d1.filename = FC.selected
d1.set_unit('sec').display(title=FC.nmrname+" fid")
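As noted above, a quick summary of the dataset can be obtained by evaluating the variable alone in a cell:
d1          # evaluating the variable name displays the dataset summary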
In the current set-up, the figure can be explored (zoom, shift, resize, etc.) with the jupyter tools displayed below the dataset.
The figure can also be saved as a png graphic file.
For more interactivity, see below.
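If you prefer to save the figure programmatically rather than through the toolbar button, here is a minimal sketch using plain matplotlib (the file name fid.png is just an example):
import matplotlib.pyplot as plt
plt.gcf().savefig('fid.png', dpi=150)   # save the currently active figure to a png file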
We are going to use a basic processing set-up; check the documentation for advanced processing.
D1 = d1.copy() # copy the imported data-set to another object for processing
D1.apod_em(0.3).zf(4).ft_sim().bk_corr().apmin() # chaining apodisation - zerofill - FT - Bruker correction - autophase
D1.set_unit('ppm').display(title=FC.nmrname) # chain set to ppm unit - and display
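For reference, the same basic processing can also be written step by step, which makes individual parameters easier to adjust; a minimal sketch using the same SPIKE methods as in the chained call above (D2 is just an illustrative variable name):
D2 = d1.copy()             # work on a fresh copy of the imported fid
D2.apod_em(0.3)            # exponential apodisation
D2.zf(4)                   # zero-filling
D2.ft_sim()                # Fourier transform
D2.bk_corr()               # Bruker correction
D2.apmin()                 # automatic phasing
D2.set_unit('ppm').display(title=FC.nmrname)   # display in ppm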
The following steps are optional.
If it is required, use the interactive phaser.
Use scale and zoom to tune the display; then use P0, P1, and pivot to optimize the phase.
Once finished, click on Apply correction.
reload(I)
I.Phaser1D(D1, reverse_scroll=True);
A simple interactive baseline correction tool
reload(I)
I.baseline1D(D1, reverse_scroll=True);
reload(I)
ph = I.NMRPeaker1D(D1, reverse_scroll=True);  # interactive peak-picker
Integration zones are computed from the peaks detected with the Peak-Picker above, so running the peak-picker first is required.
reload(I)
I.NMRIntegrate(D1);
A convenient tool to set up your own figure.
reload(I)
s = I.Show1Dplus(D1, title=FC.nmrname, reverse_scroll=True);
The processed dataset can be saved to disk, either as stand-alone native SPIKE files (there are other formats)
D1.save('example1.gs1')
or as a csv text file, in which case it is probably better to remove the imaginary part, which is not useful there.
The file contains some basic information in addition to the spectral data.
D1.copy().real().save_csv('example.csv')
D1.pk2pandas().to_csv('peaklist.csv')               # export the peak list as csv (via pandas)
D1.integrals.to_pandas().to_csv('integrals.csv')    # export the integrals as csv (via pandas)
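The exported csv files can be read back for further analysis, for instance with pandas; a minimal sketch, assuming that metadata lines written by save_csv, if any, start with a '#' character (adjust to the actual file layout):
import pandas as pd
spec  = pd.read_csv('example.csv', comment='#')   # spectral data exported above
peaks = pd.read_csv('peaklist.csv')               # peak list exported above
print(spec.head())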
# adapt the parameters below
Zoom = (0.5,8) # zone to bucket - (start, end) in ppm
BucketSize = 0.04 # width of the buckets - in ppm
Output = 'screen' # 'screen' or 'file' determines output
BucketFileName = 'bucket.csv' # the filename if Output (above) is 'file' - don't forget the .csv extension.
# the following cell executes the bucketing
if Output == 'file':
    with open(BucketFileName,'w') as F:
        D1.bucket1d(zoom=Zoom, bsize=BucketSize, pp=True, file=F)
    print('buckets written to %s\n'%op.realpath(BucketFileName))
else:
    D1.bucket1d(zoom=Zoom, bsize=BucketSize, pp=True);
The tools in this page are under intensive development; things are going to change rapidly.