Two selection factors play the most important roles in choosing the optimal satellite mission scenarios: (i) the performance of the mission in retrieving the geophysical signals, and (ii) technical and stability issues connected with the mission. From a technical viewpoint, the altitude must not be less than 290 km, while from the view of geodetic sensitivity an orbit height of no more than 320 km is preferable. This is a trade-off between higher sensitivity to short-wavelength phenomena at lower altitude and a shorter mission lifetime due to the larger atmospheric drag force. The decision rests on the expectation that future satellite missions will benefit from drag-free technology, as demonstrated by GOCE, which allows a mission to fly at lower altitudes (Marchetti et al., 2008; St Rock et al., 2006; Wiese et al., 2011b). Furthermore, an inter-satellite distance of 100 km for an inline formation equipped with laser interferometry is chosen as a trade-off between instrument performance and relative accuracy in determining short-wavelength features in the gravity field (Wiese et al., 2009). The stability problems of Pendulum and Cartwheel formations, as well as the laser interferometry pointing issue, limit the choices to inline formations and conservative Pendulum formations with a small opening angle (GFO). However, due to the higher performance of the GFO formation compared to the inline configuration, the GFO would be the favorite scenario for a single-pair satellite mission. The scenario is chosen on a repeat orbit of β/α = 507/32, which shows a good performance for 6-day recovery (Table 6.1). For dual satellite pairs, two different formation scenarios are selected:
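The basic geometry of the chosen repeat orbit can be checked with a few lines of code. The sketch below is illustrative only (the function name and spherical-Earth simplification are ours): for β/α = 507/32 it derives the equatorial spacing between adjacent ground tracks after the full repeat cycle and the mean number of revolutions per day.

```python
import math

def repeat_orbit_properties(beta, alpha, earth_radius_km=6378.137):
    """Basic properties of a repeat orbit with beta revolutions
    in alpha nodal days (simplified spherical-Earth view)."""
    equator_km = 2 * math.pi * earth_radius_km
    track_spacing_km = equator_km / beta   # equatorial gap between adjacent tracks
    revs_per_day = beta / alpha            # mean revolutions per nodal day
    return track_spacing_km, revs_per_day

spacing, revs = repeat_orbit_properties(507, 32)
# spacing ≈ 79 km at the equator, ≈ 15.8 revolutions per day
```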
The new satellite missions with their innovative observation principles, as described in Section 2.1, in combination with the tailored analysis strategies, as introduced in Section 2.2.1, have provided an unprecedented increase in the accuracy of the gravity field solutions. So far, the most common way of representing the gravity field of the Earth has been an expansion in terms of spherical harmonics. Despite the outstanding results already achieved, it can be assumed that the signal content present in the satellite observations has not yet been fully exploited. The reasons for this are manifold, but one major aspect is the insufficient modeling of background forces (such as ocean tides or atmospheric variations), which is understood to be primarily responsible for the fact that, e.g., the projected GRACE baseline accuracy has not been achieved so far. Yet another reason for sub-optimal signal exploitation could be an insufficient modeling of the satellite data by a global representation by means of spherical harmonics. To extract the full signal information present in the satellite and sensor data, it seems reasonable to tailor the analysis process to the specific characteristics of the gravity signal present in certain areas. Especially in the higher-frequency part of the spectrum, the gravity field features vary significantly in different geographical regions. These heterogeneities are caused by different topographic characteristics featuring a rough gravity signal, for example in mountain areas or deep-sea trenches, and rather smooth signal areas, for example in parts of the open oceans. In these cases, the heterogeneity of the gravity field cannot properly be taken into account with the help of spherical harmonics as basis functions with global support. Their resolution can only be defined globally, resulting in the problem that the maximum degree adequate for very rough gravity field features would cause instabilities in the computation
The development of the AVANTI experiment represents a further step in the German Aerospace Center (DLR) roadmap to enhance the expertise in the field of noncooperative rendezvous. A preliminary activity was the Formation Re-acquisition experiment performed in 2011, at the end of the nominal mission timeline of PRISMA, in preparation for the operations re-handover from the German Space Operations Center (GSOC) back to the OHB Sweden facilities. Afterwards, during the extended phase of the PRISMA mission (April 2012), DLR executed the Advanced Rendezvous Demonstration using GPS and Optical Navigation (ARGON) experiment. On this occasion, the maneuvering spacecraft of the PRISMA duo accomplished a noncooperative approach towards the client from 30 to 3 km of inter-satellite separation over one week. The relative navigation system exploited solely the angles-only measurements coming from the star-tracker vision-based sensor. A dedicated ground-based flight dynamics system then carried out the routine processing of the camera images collected onboard, estimating the relative orbit of the servicer with respect to the client vehicle and accomplishing the maneuver planning. The availability of independent and precise navigation information from carrier-phase differential GPS techniques was exploited post-facto to properly evaluate the achieved performance, after the conclusion of the technology demonstration.
The Colombo-Nyquist rule enables higher temporal resolution of gravity recoveries, while the high noise level caused by spatial aliasing of the sub-Nyquist solutions can be dealt with by post-processing methods. This means that for a single-pair satellite mission, a reasonably good 6-day gravity recovery up to maximum degree 90 is achievable (provided that the ground-track coverage is homogeneous enough). By employing two satellite pairs, one in a near-polar orbit and the other in an inclined orbit, this is even achievable with 3-day solutions, according to an interpretation of the modified CNR for dual-formation missions. Strictly speaking, the latter solution is actually of higher quality. The reason is that an inclined formation increases isotropy by adding East-West measurement components rather than merely doubling the number of samples. Moreover, the 3-day solution benefits from higher temporal resolution and consequently less temporal aliasing. An important benefit of such short time-interval solutions is that they can be applied as dealiasing products (e.g. Wiese et al., 2011a), independent of state-of-the-art geophysical models, when aiming at time-variable gravity recoveries over longer time spans, e.g. monthly solutions. Comparing the 6-day recoveries of single inline satellite missions with the 6-day recoveries of two-pair missions implies at least a tenfold improvement from employing dual satellite missions. Obviously, the quality improvement depends on the altitude, but no significant correlation between the repeat orbits of the dual satellite mission scenarios and the quality of the solutions has been detected. For the dual pair satellite mis-
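The sampling argument can be sketched numerically. The classical Colombo-Nyquist rule demands at least twice as many revolutions in the recovery interval as the target maximum degree, while a modified reading requires only as many revolutions as the degree; the helper below is our own illustration of that relation, not the thesis' notation.

```python
def max_degree_cnr(revolutions, modified=False):
    """Maximum resolvable spherical harmonic degree for a given
    number of revolutions in the recovery interval.
    Classical CNR: revolutions >= 2*L  ->  L_max = revolutions // 2
    Modified CNR:  revolutions >=   L  ->  L_max = revolutions"""
    return revolutions if modified else revolutions // 2

# About 95 revolutions accumulate over 6 days of a 507/32 repeat orbit,
# so the modified rule supports recoveries up to roughly degree 90.
revs_6day = round(507 / 32 * 6)                 # ≈ 95
degree = max_degree_cnr(revs_6day, modified=True)
```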
Digital beamforming in a reflector system typically involves the combination of only a small number of array elements. This significantly reduces the on-board processing requirements compared to a direct radiating array. For example, only 5 element signals have been combined at each instant of time in the conjugate field matching simulations of Figure 1. In contrast, a real-time scan-on-receive with a 15 m high planar array would require the onboard combination of more than 50 element signals. The increased performance achievable by a large reflector aperture also relaxes the satellite's thermal, power and energy demands and/or allows for longer operation times, as desired for future Earth system monitoring missions which call for long orbital duty cycles.
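A minimal sketch of the conjugate-field-matching combination described above (our own illustration; the actual on-board processing is more involved): the few element signals are weighted with the normalized conjugate of each element's field toward the target direction and summed coherently.

```python
import numpy as np

def conjugate_field_match(element_signals, element_fields):
    """Combine a small number of feed-element signals by conjugate
    field matching: weights are the normalized element fields toward
    the target, applied as w^H x."""
    w = np.asarray(element_fields) / np.linalg.norm(element_fields)
    return np.vdot(w, element_signals)   # sum of conj(w_i) * x_i

# Example: 5 elements receiving a unit signal through their field patterns
rng = np.random.default_rng(0)
fields = rng.standard_normal(5) + 1j * rng.standard_normal(5)
signals = fields * (1.0 + 0.0j)              # signal scaled by each element field
out = conjugate_field_match(signals, fields)  # coherent gain ≈ ||fields||
```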
The main contributions of this thesis, as well as outlooks, can be summarized as follows: The gravity field model IGGT_R1 has been generated from the GOCE IGGT by the least squares method as described in this thesis. Due to the special characteristics of the IGGT, this model avoids errors introduced by inaccurate measurement of the satellite's attitude during the traditional coordinate transformations. Additionally, according to the principle of least squares, the direct solution method is theoretically rigorous and no grid or integral discretization errors are introduced in the harmonic analysis. The linearization error can be neglected if the a-priori gravity field model is relatively accurate; in that case the corresponding linearization error is smaller than the accuracy of the GOCE GGs. Using the SCRA in combination with Kaula's rule of thumb is an effective way to mitigate the ill-conditioning of the normal equations, which primarily helps to improve the low-order coefficients. Numerical analyses of this gravity field model showed that the precision of the obtained gravity anomaly values over Antarctica, South America, Africa, west China and Indonesia is improved due to the GOCE GGs. The RMS differences between GNSS/leveling data and the model-derived geoid heights show that IGGT_R1 is more precise than SPW_R1 and TIM_R1, and performs similarly to DIR_R1 and DIR_R2. In comparison to the a-priori gravity field model EIGEN-5C, the overall accuracy of IGGT_R1 is improved according to the GNSS/leveling checking results. This represents the contribution of the GOCE GGs, especially in Brazil and China. According to geostrophic velocity speeds in the Agulhas current area, IGGT_R1, which yields more detail because of the GOCE GGs, might be an improvement over EIGEN-5C. Considering the computational cost of the least squares method, IGGT_R1 contains the same GOCE GGs as ESA's gravity field models of the first release.
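As an illustration of the regularization step (a generic Kaula-type Tikhonov sketch of our own, not the thesis' actual SCRA implementation), degree-dependent prior weights proportional to l⁴ are added to the normal matrix, reflecting Kaula's rule of thumb that coefficient magnitudes decay roughly as 10⁻⁵/l²:

```python
import numpy as np

def kaula_regularized_solve(A, y, degrees, k=0.0):
    """Least squares with a Kaula-type prior:
    solve (A^T A + k * diag(l^4)) x = A^T y.
    degrees[i] is the spherical harmonic degree of coefficient i;
    k scales the regularization (k = 0 gives plain least squares)."""
    A = np.asarray(A, float)
    l = np.asarray(degrees, float)
    N = A.T @ A + k * np.diag(l**4)   # regularized normal matrix
    return np.linalg.solve(N, A.T @ np.asarray(y, float))

# Tiny example: an overdetermined 6x3 system
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))
y = rng.standard_normal(6)
x_plain = kaula_regularized_solve(A, y, degrees=[2, 3, 4], k=0.0)
x_reg = kaula_regularized_solve(A, y, degrees=[2, 3, 4], k=0.1)
```

With k = 0 the function reproduces the unregularized least squares solution; increasing k damps the higher-degree coefficients most strongly, which is the intended stabilizing effect.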
In the future, I plan to calculate a more precise gravity field model which contains all GG data of the GOCE mission. The conclusion is that high-precision gravity field models can be obtained using the approach outlined in this thesis. This provides a new direction for building gravity field models from the GOCE GGs as well as from future satellite GGs. Paper 1 (cf. chapter 2) is my contribution to answering research question 1 (cf. chapter 1).
HiTeSEM (High-resolution Temperature and Spectral Emissivity Mapping) is a preparatory study, funded by the German Aerospace Center (DLR), that aims at preparing the ground for a future spaceborne hyperspectral thermal mission. Thermal remote sensing is poised to become a major source of information on land surface processes. HiTeSEM aims at closing the research gap still hampering the utilization of thermal infrared data at reasonable spectral and spatial resolution and focuses on surface-solid Earth interactions to assess natural and human-induced changes. Land surface temperature (LST) and spectral emissivity (LSE) of the Earth are the basis for the extraction of sensitive variables in geology, pedology, and vegetation monitoring. Towards this end, HiTeSEM will enable the research community to evaluate the potential of emissive spectroscopy methodologies in Earth observation to answer a series of key science questions related to global change, human health, and food security. Relevant target variables include soil mineral composition, soil organic matter (SOM), surface moisture availability, evapotranspiration and stomatal/surface conductance. These are key indicators for soil productivity and plant stress in sensitive regions and can be used to govern and adapt land use practices under challenging ecological and climatic conditions. In urban remote sensing, HiTeSEM is expected to furnish important information to define thermal models, which implies knowledge of the surface material composition by means of spectral emissivity retrieval. The methodological challenge of HiTeSEM lies in the development of a robust, high-performance temperature emissivity separation (TES) technique to allow optimum pre-processing of the measured thermal radiance signal at the sensor level. From these scientific goals, a series of mission and instrument requirements has been derived that can be summarized as follows:
Chakrabarti and Kunal [Chakrabarti 2011] presented a work that empirically tackles noisy tweets to cover sport events in real time. Their approach focused on recurring events, such as sport tournaments and leagues. They exploited the recurring character of such events to enable their system to learn from previous matches in order to better summarize an ongoing event. They found that their approach can help to build models for the different sub-events by using tweets from prior events and, consequently, to learn the underlying hidden structure of such events. In summary, the previous work in this stream of research explored understanding the flows and trends of events using shared user-generated short messages (e.g., tweets). We found that such content is a valuable source of information that can be used for efficiently and effectively summarizing and understanding events. Moreover, visualizing tweets along a timeline and by geographical distribution enabled an appropriate representation of the event. Although these studies support the idea of leveraging shared user-generated content to understand and enrich the spectator and social experiences around events, they do not focus on real-time sharing and live communication, which may provide new access to a live event and generate novel experiences. Building on the findings of the prior work discussed above, we investigate real-time sharing experiences during in situ events with a particular focus on mobile user-generated video.
With the RTCM3 data stream, the GNSS observations collected at the RS are provided to the IMS. This data stream includes only those observations of the RS for which the Performance Key Identifiers (PKIs) are fulfilled. The PKIs are derived during the self-monitoring process on the RS side. The IMS itself analyses both its own measurements and those of the RS, which enables the P-DGNSS positioning. The achieved validation results are used to generate the IMS feedback in the form of a reference station integrity monitoring (RSIM) message to the RS. The combination of the validation results at the RS (self-monitoring) and at the IMS (local or far-field integrity monitoring) then controls the generation of the final RTCM3+ message. The main difference between Local Integrity Monitoring (LIM) and Far Field Integrity Monitoring (FFIM) is the distance between the RS and the IMS. The application of FFIM is preferred due to its capability to consider decorrelation effects of GNSS error sources in the coverage area. Although this generic architecture is similar to IALA Beacon DGNSS ([IALA-R-121], [Hoppe-2006]), the transition from C-DGNSS to P-DGNSS requires the specification of extended GBAS operation states and new PKIs, taking into account the hierarchical GBAS data processing supporting P-DGNSS.
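The PKI gating described above can be sketched in a few lines. This is purely illustrative logic (the field names and data structure are our assumptions, not from the RTCM/RSIM specifications): only observations whose PKIs are all fulfilled enter the outgoing stream.

```python
def gate_observations(observations):
    """Keep only reference-station observations whose performance key
    identifiers (PKIs) are all fulfilled. Each observation is assumed
    to carry a dict of boolean PKI results under the key 'pki'."""
    return [obs for obs in observations if all(obs["pki"].values())]

obs = [
    {"sat": "G01", "pki": {"signal_quality": True,  "range_check": True}},
    {"sat": "G07", "pki": {"signal_quality": False, "range_check": True}},
]
passed = gate_observations(obs)   # only G01 survives the gating
```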
Besides the ionosphere, one of the most significant obstacles to an accurate navigation solution in cities with GPS or GALILEO is multipath reception. Various channel models exist for ground-to-ground communications (e.g. COST 207 for the GSM system), but there is still a lack of knowledge about broadband satellite-to-earth channels. Therefore, the German Aerospace Centre (DLR) performed a measurement campaign in 2002. In this campaign we used a Zeppelin to simulate a satellite transmitting a 100 MHz broadband signal towards the earth. To ensure a realistic scenario, the signal was transmitted between 1460 and 1560 MHz, just beside the GPS L1 band. This signal was received by a measurement van and recorded on a regular time grid. The gathered data was then passed through a super-resolution algorithm to detect the individual reflections. In a further step we tracked the detected reflections over time and gained knowledge about the characteristics of each isolated reflection, including both the Doppler shift and the delay of the reflection. In addition, we gained knowledge about the direct-path behaviour.
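As a toy stand-in for the reflection-detection step (the actual campaign used a far more capable super-resolution estimator), a coarse delay of the strongest path can be read off the cross-correlation peak between the transmitted and received signals:

```python
import numpy as np

def estimate_delay(tx, rx, fs):
    """Coarse delay (s) of the strongest propagation path via the
    cross-correlation peak; resolution is limited to one sample,
    unlike the super-resolution algorithm described in the text."""
    corr = np.abs(np.correlate(rx, tx, mode="full"))
    lag = int(corr.argmax()) - (len(tx) - 1)   # lag in samples
    return lag / fs

rng = np.random.default_rng(2)
tx = rng.standard_normal(64)
rx = np.concatenate([np.zeros(10), tx])   # path delayed by 10 samples
delay = estimate_delay(tx, rx, fs=1e6)    # 10 samples at 1 MHz -> 1e-5 s
```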
size and the overall management of the store/forward mechanism. In particular, much attention also has to be dedicated to the on-board memory actually used to store bundles when the link is not available (e.g., because of cloud coverage). In this light, fading durations on the order of a few milliseconds (1-4 ms, according to Table 3), with a frequency strictly depending on the data rate, can be efficiently counteracted by the ARQ mechanism available within LTP or by the implementation of erasure codes. Indeed, the case of 1 ms fading at a data rate of 100 Mbit/s gives rise to only a few packet losses (around 10, assuming a packet length of 1024 bytes), which can be easily and efficiently recovered by ARQ schemes. On the contrary, a data rate of 10 Gbit/s with a fade duration of 4 ms corresponds to thousands of lost packets (again assuming a packet length of 1024 bytes), for which the use of erasure codes could be more beneficial, also in terms of required on-board storage and timely delivery of data to the destination. On the other hand, blockage events, which are inherently longer, cannot be efficiently coped with by means of ARQ or erasure codes, as their requirements cannot be easily accommodated (very large storage units). Under these circumstances, the use of the store/forward capabilities available from the Bundle Protocol is the most promising approach to avoid degrading the overall system performance while still offering a satisfactory degree of quality of service to the end users.
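The loss figures quoted above follow directly from fade duration, data rate, and packet length. A quick check of the arithmetic (our own helper, matching the numbers in the text):

```python
def packets_lost(data_rate_bps, fade_duration_s, packet_bytes=1024):
    """Number of packets wiped out by a link fade of the given duration."""
    return data_rate_bps * fade_duration_s / (packet_bytes * 8)

low = packets_lost(100e6, 1e-3)   # 1 ms fade at 100 Mbit/s -> ~12 packets
high = packets_lost(10e9, 4e-3)   # 4 ms fade at 10 Gbit/s  -> ~4900 packets
```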
Operating this Ground Segment is a significant challenge for the Ground Operations Team at Col-CC, not only due to the vast number of facilities and their world-wide distribution, but also because of the number of different users (Columbus and ATV flight control, payload facilities, engineering support, PR) with their specific operational needs and constraints. In contrast to previous short-duration missions with sequential mission phases, continuous ISS operations support requires handling the current increment execution in parallel with the preparation and training for the following increment(s) and the post-increment evaluation. The long lifetime of 12+x years requires continuous maintenance and sustaining engineering of the ground segment infrastructure, with focus on the life-span of individual components as well as the life-cycles of entire technologies. Replacement of equipment or systems must be performed with minimal impact on real-time operations, and in coordination with increment execution/preparation activities. An important component of this structure is the management of human resources: an experienced team of qualified operators and engineers has to be trained and kept at a level of proficiency sustained over this long period.
Mostly for the benefit of the future operations of the VLBAI, we include the effects of groundwater level changes, atmospheric mass changes, and the Earth's body and ocean tides in our modelling. This is necessary for the individual gravimetry experiment (and other physics experiments as well) in the VLBAI on the one hand, and for comparing measurements from different epochs, e.g. with different groundwater levels, on the other hand. Previous investigations in the gravimetry laboratory of a neighbouring building showed a linear coefficient of 170 nm/s² per meter change in the local groundwater table (Timmen et al. 2008). This corresponds to a porosity of >30% of the soil (Gitlein 2009). For our model, we adopt a pore volume of 30%, which has to be verified by gravimetric measurements and correlation with local groundwater measurements. Two automatic groundwater gauges are available around the building: one installed during the construction work and a second with records dating back several decades, also used by Timmen et al. (2008). The effect of atmospheric mass changes is calculated using the ERA5 atmospheric model provided by the European Centre for Medium-Range Weather Forecasts and the methods described by Schilling (2019). Tidal parameters are extracted from observational time series (Timmen and Wenzel 1994; Schilling and Gitlein
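The quoted admittance can be cross-checked with a crude Bouguer-slab model (our simplification, not the modelling approach of the cited studies): a groundwater rise Δh in soil of porosity φ adds an equivalent water layer of thickness φΔh, changing gravity by 2πGρφΔh. Inverting this for φ at 170 nm/s² per meter gives a porosity of about 0.4, the same order as the >30% quoted above.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
RHO_WATER = 1000.0   # kg/m^3

def porosity_from_admittance(admittance_nm_s2_per_m):
    """Effective porosity implied by an infinite Bouguer slab:
    dg/dh = 2*pi*G*rho_water*phi (crude model, for illustration only)."""
    slab_nm = 2 * math.pi * G * RHO_WATER * 1e9   # nm/s^2 per metre of water
    return admittance_nm_s2_per_m / slab_nm

phi = porosity_from_admittance(170.0)   # ≈ 0.4 under this simple model
```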
The targets that can be reached by chemical rockets are limited because of massive propellant requirements. With the help of low-thrust gravity-assist trajectories it is possible for a spacecraft to reach greater distances and explore the outer limits of the solar system. Exploration missions are becoming more and more ambitious and rely on gravity assists as providers of free energy. For many years NASA, ESA and other space agencies have been using this method to extract freely available energy in space. Missions like Voyager, Cassini, Messenger and New Horizons all relied on gravity assists to accomplish their missions. The Dawn and Hayabusa missions showcase the benefits of the low-thrust gravity-assist combination, even though the gravity assist was not mission-critical for Dawn. The main goal of low-thrust gravity-assist missions is not only to reach distant targets but also to decrease the mass of the required propellant, thereby increasing the payload of the mission. The potential fuel mass savings and energy benefits make the optimization of the low-thrust and gravity-assist combination one of the most exciting research topics.
TRIM and with only EGDMA as crosslinker, the polymer capacity for proteins was up to 10 times lower. The main advantage of the 3D polymer lies in the opportunity to avoid binding-site hindrance, which could be present in dense, flat antibody layers. Together with incorrect antibody orientation, this could lead to a high percentage of unavailable binding sites. The three-dimensional network leads to a random immobilisation of antibodies far from the sensor surface, but still in close enough proximity to allow the detection of antigen binding (see Fig. 4.1). When the reactivity of the 3D polymer was assessed in solution at different pH values (by measuring the fluorescence of the formed isoindole group), a greater response was recorded at basic pH. Significantly lower fluorescence was observed for the polymer suspensions in acidic medium. The fluorescence intensity, recorded 2 minutes after addition of 6 M NH4OH, was 615.7 ± 35.7 units at pH 8.0; 512.3 ± 41.0 units at pH 7.4; 64.4 ± 14.8 units at pH 5.0 and 55.2 ± 18.7 units at pH 4.5. The experiments were performed in triplicate. The presence of fluorescence is proof of the isoindole formation and therefore of the existence of thioacetal groups. The results also demonstrate the suitability of the polymer for protein immobilisation at physiological pH, which can be advantageous to avoid protein denaturation. The average molecular weight of the polymer, as determined by GPC, was 110 kDa (polystyrene equivalent).
The estimation of potential evaporation has been enhanced to use a moving 1-year heat index. Independently of the prospective temperature distribution of the rest of the year, it is now possible to run the simulation up to the actual day of the year. Nevertheless, evaporation remains a critical error source in the global mass balance, because it is the only mass flux in the hydrological cycle that is computed within two different models, atmosphere and hydrology. Both models contain their own land surface component, but their results are not synchronized. Unfortunately, the land surface module LSXM included in the hydrological model is necessary as long as the atmospheric models, for instance ECMWF, do not provide the relevant parameter fields, such as soil moisture and snow accumulation, in the same operational manner as precipitation and temperature. As an interim solution, LSXM has been enabled to directly import evaporation rates from an atmospheric model. This ensures a consistent, mass-conservative water exchange between atmosphere and continental surface. The simulation of evaporation rates with the land surface scheme TESSEL (Beljaars & Viterbo, 1999) in the ECMWF weather forecast system benefits notably from the more sophisticated treatment of wind, radiation, and humidity influences compared to the simple temperature-based Thornthwaite method in the LSXM. The imported evaporation estimates are more highly correlated with local precipitation events. However, water storage distributions still differ between atmospheric and hydrological model estimates. The ECMWF evaporation rates do not necessarily fit the surface soil saturation characteristics and local water-holding capacities of LSXM. Nevertheless, the total vertical water balance remains comparable and, concerning the global water cycle, the terrestrial hydrology is connected more consistently to the atmosphere.
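For reference, the temperature-based Thornthwaite method mentioned above computes monthly potential evapotranspiration from a heat index accumulated over twelve monthly mean temperatures; the "moving 1-year heat index" simply builds this index from the preceding twelve months instead of a fixed calendar year. A sketch of the standard, unadjusted formula:

```python
def thornthwaite_pet(monthly_means_c, month_mean_c):
    """Unadjusted monthly PET (mm) after Thornthwaite (1948).
    monthly_means_c: 12 monthly mean temperatures (deg C) used for the
    heat index (here: a moving 1-year window ending at the current month);
    month_mean_c: mean temperature of the month in question."""
    I = sum((max(t, 0.0) / 5.0) ** 1.514 for t in monthly_means_c)
    if I == 0.0:
        return 0.0
    a = 6.75e-7 * I**3 - 7.71e-5 * I**2 + 1.792e-2 * I + 0.49239
    t = max(month_mean_c, 0.0)
    return 16.0 * (10.0 * t / I) ** a

pet = thornthwaite_pet([10.0] * 12, 10.0)   # roughly 49 mm/month
```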
As mentioned in chapter one, the sub-cycle is the time interval between two neighboring tracks, i.e. two ascending or two descending tracks. The sub-cycle is an interesting parameter to measure how fast an orbit reduces the large gap at an arbitrary parallel, e.g. the equator, when only ascending or descending tracks are considered. Envisat has a 35-day repeat orbit with a 16-day sub-cycle. Figure B.3 shows all ascending ground tracks of Envisat after 16 days at the equator. Sentinel-3 and SWOT have 27- and 21-day repeat orbits with 4- and 10-day sub-cycles, respectively. For hydrological purposes, an orbit with shorter sub-cycles is preferred, because within a short time an altimeter can provide more observations over a given inland water body. As an example, figure 2.2 (left panel) shows that Envisat measures Issykul lake (located in Kyrgyzstan) every 16 days (sub-cycle), whereas its repeat cycle is 35 days. Therefore we have more altimetry data over such a lake, which is interesting for hydrologists. Another advantage of an orbit with a shorter sub-cycle is related to flood management. During flood seasons we need measurements with higher temporal resolution to monitor the flooded area, and an altimeter on a shorter sub-cycle orbit can capture flood events. Therefore a short sub-cycle can be an advantage in the repeat orbit design of satellite altimetry missions for hydrological applications. Figure B.2 shows the coverage of SWOT (nadir only) and Envisat over Issykul lake during the sub-cycle.
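The sub-cycle follows from simple modular arithmetic on the repeat parameters: a near-repeat occurs after k days when k·β ≡ ±1 (mod α), i.e. the day-k track lands one track spacing away from the day-0 track. A sketch (the revolution counts 501/35 for Envisat and 385/27 for Sentinel-3 are standard mission values, not taken from this text):

```python
def sub_cycles(beta, alpha):
    """Days k < alpha after which a beta/alpha repeat orbit nearly
    repeats itself (ground track shifted by exactly one track spacing),
    i.e. k * beta is congruent to +/-1 modulo alpha."""
    return [k for k in range(1, alpha) if (k * beta) % alpha in (1, alpha - 1)]

envisat = sub_cycles(501, 35)     # includes the 16-day sub-cycle
sentinel3 = sub_cycles(385, 27)   # includes the 4-day sub-cycle
```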