Methods: In a first step, experimental measurements were performed for proton and carbon ion energies in the available energy ranges. The data included depth dose profiles measured in water and spot sizes in air at various distances from the isocenter. Using an automated regularization-based optimization process (AUTO-BEAM), GATE/Geant4 beam models of the respective beam lines were generated. These were obtained sequentially by using least-squares weighting functions with and without regularization to iteratively tune the beam parameters energy, energy spread, beam sigma, divergence, and emittance until a user-defined agreement was reached. Based on the parameter tuning for a set of energies, a beam model was semi-automatically generated. The resulting beam models were validated for all centers by comparison to independent measurements of laterally integrated depth dose curves and spot sizes in air. For one representative center, three-dimensional dose cubes were measured and compared to simulations. The method was applied to one research beam line and four clinical beam lines for protons and carbon ions at three particle therapy centers using synchrotron or cyclotron accelerator systems: (a) MedAustron ion therapy center, (b) University Proton Therapy Dresden, and (c) Center Antoine Lacassagne, Nice.
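To illustrate the kind of tuning described above, the following sketch minimizes a weighted least-squares cost with an optional Tikhonov regularization term that pulls a single beam parameter (here the spot sigma) towards a prior estimate. This is a minimal illustration under assumed toy data, not the AUTO-BEAM implementation; all function names, the Gaussian spot model, and the numerical values are hypothetical.

```python
# Sketch: regularized least-squares tuning of one beam-model parameter.
# Hypothetical example, not the AUTO-BEAM code.
import math

def gaussian_profile(x, sigma):
    """Toy lateral spot profile model: unnormalized Gaussian."""
    return math.exp(-0.5 * (x / sigma) ** 2)

def cost(sigma, xs, measured, weights, lam=0.0, sigma_prior=None):
    """Weighted least-squares cost, optionally with a Tikhonov term."""
    c = sum(w * (gaussian_profile(x, sigma) - m) ** 2
            for x, m, w in zip(xs, measured, weights))
    if lam > 0.0 and sigma_prior is not None:
        c += lam * (sigma - sigma_prior) ** 2  # regularization towards prior
    return c

def tune_sigma(xs, measured, weights, lo=1.0, hi=10.0, tol=1e-4, **kw):
    """Derivative-free 1-D minimization by golden-section search."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    while b - a > tol:
        c1, c2 = b - g * (b - a), a + g * (b - a)
        if cost(c1, xs, measured, weights, **kw) < cost(c2, xs, measured, weights, **kw):
            b = c2
        else:
            a = c1
    return 0.5 * (a + b)

# Synthetic "measurement": a spot with sigma = 4 mm, measured on a 0.5 mm grid.
xs = [i * 0.5 for i in range(-20, 21)]
measured = [gaussian_profile(x, 4.0) for x in xs]
weights = [1.0] * len(xs)
sigma_fit = tune_sigma(xs, measured, weights, lam=0.1, sigma_prior=4.5)
```

With regularization switched on, the fitted sigma lands slightly above the data optimum of 4 mm, pulled towards the prior of 4.5 mm; in the real multi-parameter tuning the same trade-off between data fidelity and regularization applies to each beam parameter.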
MR-guided brachytherapy (BT) has been established over the last two decades and has since been applied in a variety of clinical settings, including open low-field-strength scanner environments. The advantages of low-field-strength scanners, apart from their cost-effectiveness due to the lack of helium cooling and their reduced spatial requirements in a small radiotherapy department, are relaxed safety concerns regarding both projectiles and implants. Furthermore, artefacts are generally reduced compared to higher-field scanners. Since low field strength also leads to reduced geometric distortion, such a scanner is a good candidate for MR-only external radiation therapy, both for photon and particle therapy. The main drawback of an open MR unit is the smaller maximum field of view (FOV) compared to high-field, closed-bore scanners. Distortions in open MR systems are often pronounced at a distance from the isocenter due to main-field inhomogeneities, which limits the FOV. For some patients this can lead to a truncated outer body contour. Although this does not affect the usability for brachytherapy, it poses an obstacle for external beam radiation therapy (EBRT). Unlike MR-only BT treatment planning, MR-only EBRT planning requires the inclusion of the full patient outline in the MR image that is used to create a synthetic CT (sCT) for dose calculation. If this challenge is overcome, sCTs can be generated by commercial atlas-based software or by research algorithms based on convolutional neural networks, both of which have shown clinically acceptable dosimetric similarity to real CT images in multiple studies. Furthermore, most published work investigating sCT conversion used MRI data acquired at a field strength of ≥1 T; the transfer of these results to scanners with lower field strength is not straightforward and therefore requires investigation.
The main goal of this PhD thesis was to develop a dose calculation platform for proton beams in magnetic fields, to be used for MRgPT. A custom PBA and beam models were proposed, following the general structure found in clinical TPSs but accounting for the effects of the magnetic field on therapeutic beams. Even though PBAs have limited dose calculation accuracy, they remain the workhorse for treatment planning in particle therapy. Although dedicated MC-based engines providing more accurate dose calculation models are starting to appear in clinical practice, PBAs are expected to remain in use for treatment plan optimization as the faster dose calculation option. For treatment planning, beam models are commonly created from reference data of depth dose distributions and spot widths and afterwards validated to ensure the accuracy of subsequent dose calculations. A very common procedure is to generate the input data by means of experimental measurements or MC simulations. Due to the lack of experimental facilities in the initial phase of this work, and the scarce measurement time, MC simulations using the GATE/Geant4 toolkit were chosen as the gold-standard method to generate calibration and validation data for the dose calculation algorithm.
For some years, interest in this technique decreased, but in recent years activity around the topic has renewed and increased thanks to improvements in ultrasound imaging as well as in ion irradiation techniques. Different groups around the world are presently carrying out simulation and experimental studies on the applicability of the ionoacoustic technique with pencil proton beams, as well as with heavier ions, using water phantoms; such studies are conducted, for example, at the Medical Physics department of LMU Munich [50, 54, 55], and in the USA at the University of Pennsylvania, the University of Milwaukee, and Stanford University. This approach promises a cost-effective and direct way to characterize the dose distribution in particle therapy. The correlation between the ultrasound images of the irradiated region and the ionoacoustic signal is obtained quasi in real time. This has recently been demonstrated in various phantoms and ex-vivo targets at 20 MeV and 50 MeV proton energy. Ionoacoustic range measurements in water at proton energies between 145 MeV and 227 MeV, using a clinical synchrocyclotron (whose acceleration principle delivers an intense, short-pulsed proton beam with a pulse width below 10 µs and a 1 kHz repetition rate, optimally suited for ionoacoustics), have also been presented. The approach looks promising and less complex compared to other techniques (described in the next sections), but its applicability to heterogeneous tissue has to be further investigated.
In charged particle therapy (PT), proton or ¹²C beams are used to treat deep-seated solid tumors, exploiting the advantageous characteristics of the energy deposition of charged particles in matter. For such projectiles, the maximum of the dose is released at the end of the beam range, in the Bragg peak region, where the tumor is located. However, nuclear interactions of the beam nuclei with the patient tissues can induce fragmentation of the projectile and/or target nuclei, and this needs to be carefully taken into account when planning the treatment. In proton treatments, target fragmentation produces low-energy, short-range fragments along the whole beam path, which deposit a non-negligible dose especially in the first crossed tissues. On the other hand, in treatments performed using ¹²C or other ions of interest (⁴He or ¹⁶O), the main concern is the production of long-range fragments that can release their dose in the healthy tissues beyond the Bragg peak. Understanding nuclear fragmentation processes is of interest also for radiation protection in human space flight, in view of deep space missions. In particular, ⁴He and high-energy charged particles, mainly ¹²C, ¹⁶O, ²⁸Si and ⁵⁶Fe, provide the main source of absorbed dose in astronauts outside the atmosphere. The nuclear fragmentation properties of the materials used to build spacecraft need to be known with high accuracy in order to optimise the shielding against space radiation. The study of these processes, which is of interest both for PT and for space radioprotection, presently suffers from the limited experimental precision achieved on the relevant nuclear cross sections, which compromises the reliability of the available computational models.
The FOOT (FragmentatiOn Of Target) collaboration, composed of researchers from France, Germany, Italy and Japan, designed an experiment to study these nuclear processes and measure the corresponding fragmentation cross sections. In this work we discuss the physics motivations of FOOT, describing in detail the present detector design and the expected performance, derived from optimization studies based on accurate FLUKA MC simulations and preliminary beam test results. The planned measurements will also be presented.
Angular distributions for elastic scattering are obtained from differential cross section data in the SAID database [SAI]. For all other processes, angular distributions are calculated in analogy to the ultrarelativistic quantum molecular dynamics model (UrQMD). This is a microscopic transport approach in which all hadrons are propagated on classical trajectories and reactions are modelled by stochastic binary scatterings of two Fermi gases diffusing into each other, according to the collision term of the Boltzmann-Uehling-Uhlenbeck equation. Phenomenological potentials such as the Yukawa, Coulomb, and Pauli potentials are used for the calculations at lower energies, which represent the range relevant to particle therapy. At energies √s > 6 GeV, the quark degrees of freedom are taken into account. In contrast to the UrQMD model, collisions between participants are not taken into account by the binary cascade, so the algorithm can only be applied for small particle densities. This implies that the nucleus is treated like a gas rather than a dense fluid, the latter being more commonly treated in molecular dynamics.
Energetic charged particles elicit an orchestrated DNA damage response (DDR) during their traversal through healthy tissues and tumors. Complex DNA damage formation after exposure to high linear energy transfer (LET) charged particles results in DNA repair foci formation, which begins within seconds. More protein modifications occur after high-LET than after low-LET irradiation. Charged-particle exposure activates several transcription factors that are cytoprotective or cytodestructive, or that upregulate cytokine and chemokine expression and are involved in bystander signaling. Molecular signaling for a survival-or-death decision in different tumor types and healthy tissues should be studied as a prerequisite for shaping sensitizing and protective strategies. Long-term signaling and gene expression changes have been found in various tissues of animals exposed to charged particles, and elucidation of their role in the chronic and late effects of charged-particle therapy will help to develop effective preventive measures.
5.1.2. Radiation Therapy and Hadron Therapy
Soon after the discovery of X-rays by Röntgen in 1895, it was anticipated that radiation could be used to treat cancer. About 50% of cancer patients in Europe have indications to receive radiation therapy. Radiation therapy can be applied either with radioactive sources or with accelerated particles. Conventional accelerator-based radiation therapy accelerates electrons, which are typically converted to photons. These machines are commercially available, can be built very compactly and fit into a single room in a hospital. Worldwide, more than 11,000 linear accelerators (LINACs) and more than 2,000 radionuclide teletherapy units are available. In Germany, more than 500 LINACs and 20 radionuclide units exist. Another choice of particles for accelerator-based radiation therapy are hadrons such as protons and carbon ions. The clear inherent advantage of hadron therapy over conventional photon therapy is the energy deposition of the particles with increasing penetration depth: photons show an exponentially decreasing energy deposition after a quick build-up at the entrance. Hadrons, by contrast, produce a Bragg curve (see section 2.2.1) with a clear maximum at a certain penetration depth. The position of this maximum, the Bragg peak, can be adjusted by tuning the energy of the primary beam. Figure 5.1 compares the energy deposition of photons and hadrons (in this case protons) versus depth. The important quantity for the biological effect is not only the physical dose but also the so-called relative biological effectiveness (RBE). It is defined as the ratio of the absorbed dose of a reference radiation (X-rays) to that of the test radiation (in this case protons or carbon ions) producing the same biological effect [134, p. 77]. While protons show a relative biological effectiveness (RBE) similar to that of photons, carbon ions have an enhanced RBE in the Bragg peak, supporting the sparing of healthy tissue in the entrance channel.
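The standard definition of the RBE can be written compactly as an iso-effect dose ratio:

```latex
\mathrm{RBE} \;=\; \left.\frac{D_{\mathrm{X\text{-}ray}}}{D_{\mathrm{ion}}}\right|_{\text{iso-effect}}
```

Here $D_{\mathrm{X\text{-}ray}}$ is the absorbed dose of the reference radiation and $D_{\mathrm{ion}}$ that of the test radiation (protons or carbon ions) producing the same biological effect; an RBE above 1 thus means the ion beam needs less dose than X-rays for the same effect.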
The best choice of treatment depends on the type of tumour. Hadron therapy is, for example, often a good choice for paediatric tumours, since the dose absorbed in healthy tissue is minimised, avoiding long-term effects through radiation-induced secondary tumours, which is most important when treating children.
The inclusion of the inhomogeneous RiFi mass distribution in the beam path can lead to similar inhomogeneities in the particle fluence distribution after the RiFi. Up to a certain distance from the RiFi (<60 cm for carbon ions and <17 cm for protons) and for specific beam settings, these inhomogeneities can be seen at the target surface (Ringbæk et al 2014). Additionally, since the particles are traveling through a variable amount of material depending on where they hit the RiFi, their range is changed accordingly, which in turn can lead to observed dose range inhomogeneities within the patient at the end of the particle ranges for carbon ion beams at RiFi-to-patient distances less than 20 cm with initial beam energies smaller than 200 MeV/u (Ringbæk et al 2014). Even though these are not relevant issues for isocentric treatment distances at most facilities, certain special situations can occur where inhomogeneities are observed clinically, such as when the pencil beam width at the RiFi plane is small compared to the RiFi structure period, e.g. when the beam is focused near the RiFi plane by the ion optics as will be shown in this work (section 2.3). Also, since one might want to treat the patient closer to the exit nozzle in order to reduce the lateral beam width, it is important to investigate RiFi-induced fluence inhomogeneities and how far from the patient the RiFi needs to be placed.
We showed numerically that there is a quantity that gives information about the success of the filter: the standard deviation of the different densities in the weight exponent. We further derived an analytical estimate for these quantities, which proved to be roughly in accord with the numerical values and which also indicates a maximum nudging parameter above which the filter is no longer expected to succeed. Hence, the weight exponents give information about the potential success of resampling, on which particle filtering is based. We also showed that it matters how nudging is performed, and in particular that a diagonal nudging matrix led to better results of the filter for the assimilation of all dynamical values. Finally, these experiments showed that the less information is given in terms of observations and of assimilated variables, the more the filter is guaranteed to 'succeed' compared to the simple analogous method of 'only nudging' the ensemble of particles. All things considered, the efficient particle filter seems to be a method which, as it is essentially ruled by the weights, is very sensitive to the observation and model error densities and hence to the parameters that enter the weights, such as the nudging, the model and observation error covariances, and the length of the assimilation cycles. We presume it is also sensitive to physical parameters such as the viscosity. In the future, it might be interesting to test the particle filter on an even more realistic test model, and to investigate whether it is possible, based on knowledge of the error densities, to predict a priori the range of nudging parameters under which the filter can be expected to work. But prior to such tests, further comparisons with other data assimilation methods such as the EnKF and the LETKF seem appropriate, as does studying whether nonlinear filtering methods are really required.
To this end, measuring the non-Gaussianity, as well as assessing the success of the filter using rank histograms or scatter plots, gives hope for a better understanding of the matter.
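The link between the spread of the weight exponents and the success of resampling can be made concrete with the effective sample size (ESS), a standard particle-filter diagnostic: when the log-weights have a large spread, the normalized weights collapse onto a few particles and the ESS drops far below the ensemble size. The following sketch uses synthetic log-weights and is only an illustration of the diagnostic, not the thesis implementation.

```python
# Sketch: effective sample size as a diagnostic for weight collapse.
# Synthetic log-weights; not the thesis code.
import math

def normalized_weights(log_weights):
    """Exponentiate log-weights (max subtracted for numerical stability)
    and normalize them to sum to one."""
    m = max(log_weights)
    w = [math.exp(lw - m) for lw in log_weights]
    s = sum(w)
    return [wi / s for wi in w]

def effective_sample_size(log_weights):
    """ESS = 1 / sum(w_i^2): equals N for uniform weights and
    approaches 1 when a single particle dominates."""
    w = normalized_weights(log_weights)
    return 1.0 / sum(wi * wi for wi in w)

uniform = [0.0] * 100                     # zero spread in the weight exponents
spread = [-0.5 * i for i in range(100)]   # large spread: weights degenerate

ess_uniform = effective_sample_size(uniform)  # = 100, resampling is healthy
ess_spread = effective_sample_size(spread)    # far below N: filter collapses
```

In this picture, the standard deviation of the weight exponents controls how quickly the ESS decays, which is exactly the kind of a-priori information about resampling success discussed above.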
The construction of spin systems on graphs is highly natural, since such systems emerge in numerous research fields. As described in the introduction of this thesis, graphs can be very useful because we are frequently confronted with ecological networks, disease networks, neural networks and social networks. Mathematically, graphs describe a set of interacting particles in the most general possible way. Such systems can thus be constructed in very complex ways, which will be studied in detail in the following subsections. In fact, knowledge about the geometry of the system is crucial. Since the interplay between the probabilistic model on the one hand and the geometry of the graph on the other is very strong, small changes in the geometry lead to new physical properties of the system. Therefore, the study of statistical models on graphs requires the introduction of new techniques and concepts compared to the well-known case of lattices. For detailed explanations of interacting particle systems we refer to [Lig 1985], [Ge 1988], [Bo 1998], [Wo 2000], [AlBa 2002], [BuCa 2005].
The quorum-sensing model. We studied the collective behavior of a stochastic many-particle model of quorum sensing, in which cells produce autoinducers to different degrees and secrete them into a well-mixed environment. Autoinducers thereby become shared amongst all individuals. On the one hand, the production of autoinducer molecules and the accompanying gene expression are assumed to be metabolically costly, such that non-producers reproduce faster than producing cells. On the other hand, we assume that cells can up-regulate their autoinducer production through a quorum-sensing sense-and-response mechanism. That is, individuals can increase their production in response to the sensed average production level in the population. The central feature of the quorum-sensing model is that individuals shape their environment (through the production of autoinducers) and, in turn, respond to this self-shaped environment (by changing their individual production of autoinducers). Thus, ecological and population dynamics are coupled in the quorum-sensing model.

Phenotypic heterogeneity in autoinducer production through the coupling of ecological and population dynamics in quorum-sensing microbial populations. We found that the coupling between ecological and population dynamics through quorum sensing can control a heterogeneous production of autoinducers in microbial populations. The population may split into two subpopulations: one with a low and a second with a high production of autoinducers. This phenotypic heterogeneity in autoinducer production is stable for many generations. At the same time, the overall autoinducer level in the environment is robustly self-regulated by the way cellular production is up-regulated. Thus, further quorum-sensing functions such as virulence or bioluminescence can be triggered in the population.
If cellular response to the environment is absent or too frequent, phase transitions occur from heterogeneous to homogeneous populations in which all individuals produce autoinducers to the same degree. Our results show that, if microbes sense and respond to their self-shaped environment, the population may not only respond as a homogeneous collective as is typically associated with quorum sensing, but may also become a robustly controlled collective of two different subpopulations.
The test particle approach is admittedly not as accurate as would be preferable, but it is reasonable to assume that the feedback of the non-thermal synchrotron particles on the magnetic field is negligible, as the magnetic and dynamic evolution of PWNe is dominated by the thermal plasma components. Several other simulations have shown that a thermal lepton and ion component is naturally produced by the termination shock (Gallant et al., 1992; Hoshino et al., 1992; Arons & Tavani, 1994). A self-consistent approach such as particle-in-cell simulations would be much more desirable; however, this imposes a severe constraint on the size of the computational domain when performed in full 3D. The high Lorentz factors demand very small simulation cells in order to cover the high-energy radiation accurately, which enormously increases the computational effort. Hence, currently available computational power offers no way to simulate the presented turbulent problem in a fully self-consistent manner. This makes our hybrid approach a valuable tool for analysing large-scale structures.
For the static correlations of Sanjeevi and Padding, particle configurations with an even distribution of attack angles were used. In a dynamic system, however, the particle orientations can generally not be assumed to be evenly distributed. As the Reynolds number increases, particles are increasingly oriented perpendicular to the mean flow velocity. This observation is consistent with the results of the unresolved DEM-CFD fluidization simulations of Mema et al., who found that spherocylinders are preferably aligned horizontally in the upper region of the vertical fluidization column, where particles are more mobile and no longer densely packed. This flow situation is similar to that in the simulations of the present study. Since these DEM-CFD simulations also considered gas-solid fluidization, the Reynolds numbers can be expected to lie outside the Stokes regime and in the range of Re = 1000, where an orientation perpendicular to the mean flow direction is also obtained. The frequent occurrence of particles with attack angles between 60° and 90° supports the idea that lift forces are not strongly correlated with the angle of attack (relative to the mean flow velocity) in dynamic gas-solid flows.
Tobacco misuse is the single leading preventable cause of death worldwide. The World Health Organization (WHO) estimates that there are 1.3 billion smokers worldwide and nearly 5 million tobacco-related deaths each year. Despite widespread knowledge of tobacco's dangerous health effects, smoking continues to pose a serious public health threat, as the number of smokers is increasing steadily. According to the 2004 Surgeon General's Report, nearly 70% of smokers want to stop smoking, but less than 5% of those who make a quit attempt are successful. By 2006, smoking cessation products on the market included two types of medication: (1) nicotine replacement therapy in the form of nicotine gums, inhalers, nasal sprays or transdermal patches, and (2) treatment with the antidepressant bupropion, which acts by attenuating withdrawal symptoms. However, clinical trials have shown that the long-term abstinence rates with these medications were only 6-10% above placebo. Thus, there is still a strong need for new therapies. In 2006, Chantix™ (Pfizer Inc.), with varenicline as a new chemical compound acting by modulating receptor activity in the brain, was approved by the FDA. Even though varenicline exceeds commonly achieved abstinence rates, only 23% of smokers remained abstinent after 1 year [Maurer et al., 2007]. However, the sales of Chantix, 883 million dollars in 2007, clearly demonstrate the high economic value of new drugs for smoking cessation.
In the recently published CAROLINA study, a prospective, randomized, controlled trial (observation period approx. 5 years, approx. 3000 patients in each study arm), linagliptin (5 mg/d) and glimepiride (1–4 mg/d) were compared with regard to cardiovascular endpoints, hypoglycaemia and body weight. There was no difference between the two study arms for 3P-MACE, 4P-MACE, or total and cardiovascular morbidity and mortality, with comparable HbA1c. Weight development was more favourable under linagliptin compared to glimepiride (–1.5 kg), and the rates of all, moderate, and severe hypoglycaemia, and of hospitalization for hypoglycaemia, were significantly lower under linagliptin than under glimepiride (HR 0.23, 95% CI 0.21–0.26, p < 0.0001; HR 0.18, 95% CI 0.15–0.21, p < 0.0001; HR 0.15, 95% CI 0.08–0.29, p < 0.0001; HR 0.07, 95% CI 0.02–0.31, p = 0.0004, respectively). The authors concluded from the data of the CAROLINA study that there are no reasons other than cost to prefer glimepiride over linagliptin in antidiabetic therapy.
Studies on the efficacy of aphasia therapy revealed a clear-cut correlation between the intensity and duration of therapy and its efficacy (e.g., Bhogal et al., 2003; Neininger et al., 2004). Moreover, in a recent study on the impact of attention therapy on language function in aphasic patients, the authors found improvement neither of attention nor of language functions (Graf et al., 2011), although the same attention training procedure had been shown to be efficient in several earlier studies (e.g., Sturm et al., 1997, 2003; Plohmann et al., 1998). The authors discuss the lack of efficacy in their study in the light of the training frequency, which left the patients with only half of the training time for each approach compared to former efficacy studies. This situation is quite comparable to our training study, where we split the total training time between alertness and OKS training, with the consequence of no clear-cut functional improvement by the single training approaches and only a trend towards higher efficacy of the combined ones. Thus, the critical parameter of therapy outcome might be the total time spent on the training. This hypothesis is corroborated by the observation that in our recent study the highest percentage of behavioral improvement and significant functional reorganization was achieved at the end of the OKS training, i.e., at the point in time during our study when the total training time (summed over alertness + OKS training) reached the same amount as that of the individual training procedures in our former studies (Sturm et al., 2004a; Thimm et al., 2006, 2009).
ALERTNESS RELATED THERAPY APPROACHES OF SPATIAL HEMINEGLECT
The presence and severity of spatial awareness deficits in hemineglect seem to depend greatly on the amount of attentional resources available for performance and thus can be strongly influenced by task demands (for a review see Bonato, 2012). Thus, spatial neglect subsequent to right hemisphere lesions is often closely associated with non-spatial deficits of attention such as intrinsic alertness and sustained attention (Samuelson et al., 1988; Robertson, 1993, 2001; Hjaltason et al., 1996; Husain and Rorden, 2003; Corbetta and Shulman, 2011). Several studies have shown that the degree to which sustained attention is impaired is a strong predictor of the persistence of neglect (Samuelson et al., 1988; Robertson et al., 1997). The postulated interaction between an anterior alerting and a posterior spatial attention network (Heilman et al., 1978; Posner and Petersen, 1990; Fernandez-Duque and Posner, 1997; Sturm et al., 2006a) directly leads to the hypothesis that training of alertness may improve spatial neglect after right hemisphere stroke.
Hepatocellular carcinoma (HCC) is the second most common cause of cancer-related death worldwide, surpassed only by lung cancer. Late diagnosis and a high degree of chemoresistance lead to a poor survival prognosis for HCC patients, with a 5-year survival rate of only 5%. The only approved first-line therapy for late-stage HCC patients is the multi-tyrosine kinase inhibitor sorafenib. Clinical trials confirmed that sorafenib treatment led to a survival benefit of 3 months; however, treatment efficacy is limited by poor response rates, numerous adverse effects and evasive cancer cell signaling. In particular, the compensatory activation of growth factor receptor signaling is a major problem restricting the clinical benefit of sorafenib. Therefore, the search for new therapeutic options to improve the efficacy of sorafenib is of great importance.
The virological response to a therapy was dichotomized into success and failure. For the present analysis, which focuses on the intrinsic value of alternative input representations, this allows us to compare similarities and differences between the various representations in the most intuitive way. In this study, our main focus is on a genotype-centric standard datum defined by the EuResist consortium (Zazzi, 2006). This definition is as close as possible to the notion of "clinical", as opposed to "phenotypic", drug resistance (cf. Section 5.3.4). A diagrammatic representation is shown in Figure 5.7. Any available genotype is considered evidence of a failing regimen because, in general, sequencing can only be performed if the virus load exceeds ≈1,000 copies per ml. Intuitively, this means that the drug combination at the time of sequencing must be considered a bad therapy for this particular genotype (otherwise sequencing would not have been possible). Successful regimens are defined by inspecting therapies that follow a failure: if the virus is undetectable at least once during the course of the follow-up therapy, and if the most recent sequencing was performed no earlier than three months before the start of that therapy, then the respective treatment is considered a success. If multiple genotypes are available, the most recent sequence sample before the onset of therapy is used. While this definition is most appropriate for the question at hand, we also performed all experiments in parallel for the very different "classical" standard datum definition of the EuResist project (Zazzi, 2006). Briefly, in the classical definition, success or failure is defined based on the viral load measurement that is closest to 56 days after the onset of therapy, among those available within 28 to 84 days after onset. If that value is below 500 copies per ml, the therapy is considered a success, otherwise a failure.
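The classical definition just described is a simple decision rule, which can be sketched as follows. The function name, input layout, and the handling of the no-measurement case are hypothetical illustrations, not taken from the EuResist software.

```python
# Sketch of the "classical" EuResist standard datum: among viral-load
# measurements taken 28-84 days after therapy onset, pick the one closest
# to day 56; below 500 copies/ml counts as success, otherwise failure.
# Names and the None convention are illustrative assumptions.

def classify_therapy(viral_loads, threshold=500.0, target_day=56,
                     window=(28, 84)):
    """viral_loads: list of (day_after_onset, copies_per_ml) tuples.
    Returns "success", "failure", or None if no eligible measurement."""
    eligible = [(day, vl) for day, vl in viral_loads
                if window[0] <= day <= window[1]]
    if not eligible:
        return None  # the classical datum is undefined without a measurement
    # Measurement closest to the 56-day target decides the label.
    day, vl = min(eligible, key=lambda p: abs(p[0] - target_day))
    return "success" if vl < threshold else "failure"

# Example: the day-60 measurement (closest to day 56) is 320 copies/ml,
# so the therapy is labeled a success; the day-10 value is outside the window.
result = classify_therapy([(10, 100000.0), (60, 320.0), (80, 12000.0)])
```

Encoding the rule this way makes the two free parameters of the definition (the 500 copies/ml threshold and the 28-84 day window) explicit, which is useful when comparing it against the genotype-centric datum.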