

Berger, C.E.H.; Kruijver, M.; Hicks, T.N.; Champod, C.; Taylor, D.; Buckleton, J.S. Commentary on: Joint recommendations of the project group “Biostatistical DNA Calculations” and the Stain Commission on the Biostatistical Evaluation of Forensic DNA Analytical Findings with Fully Continuous Models (FCM). J. Forensic Sci. 2024, 69, 730-735.


Recently, Hahn et al. [1] published the “Joint recommendations of the project group “Biostatistical DNA Calculations” and the Stain Commission on the Biostatistical Evaluation of Forensic DNA Analytical Findings with Fully Continuous Models (FCM).” While the work of the project group and commission to encourage the adoption of FCMs in Germany is to be commended, some of their recommendations are, in our opinion, problematic. In this response we point out a number of issues with their analysis of FCM results, and with their recommendations based on that analysis.


Botter, D.; Van Driessche, P.M.I.; Berger, C.E.H. Reactie op Duijst, Geweldhandelingen tegen het hoofd. NTS 2023, 1, 15-19.


In ‘Facts of common knowledge concerning violent acts against the head’, Prof. Duijst argues that judges too readily rely on ‘facts of common knowledge’ to arrive at a conviction. The article suggests that the risks of violence against the head are thereby overestimated.
In our opinion, the article is partly based on shortcomings in its account of the anatomy of the head, the pathophysiology following violent impacts to the head, and the biomechanical aspects. As a result, it incorrectly concludes, or at least suggests, that the probability of serious or life-threatening injury from violence against the head is low.


De Boer, H.H.; Fronczek, J.; Berger, C.E.H.; Sjerps, M. The logic of forensic pathology opinion. Int. J. Legal Med. 2022, 136, 1027-1036.


Evaluating evidence and providing opinions are at the heart of forensic science, and forensic experts are expected to provide opinions that are based on logically sound and transparent scientific reasoning, and that honour the boundaries of their area of expertise. In order to meet these objectives, many fields of science explicitly apply Bayes’ theorem, which describes the logically correct way to update probabilities on the basis of observations. Making a distinction between ‘investigative’ and ‘evaluative’ modes of operating helps to implement the theorem in daily casework. Use of these principles promotes the logic and transparency of the reasoning that leads to an expert’s opinion and helps the expert to stay within her remit. Despite these important benefits, forensic pathology seems slow to adopt these principles. In this article, we explore this issue and suggest a way forward. We start with a short introduction to Bayes’ theorem and its benefits, followed by a discussion of why its application is actually second nature to medical practitioners. We then discuss the difference between investigative and evaluative opinions, and how they enable the forensic pathologist to reconcile Bayes’ theorem with the different phases of a forensic investigation. Throughout the text, practical examples illustrate the various ways in which the logically correct way of evidence interpretation can be implemented, and how it may help the forensic pathologist to provide an appropriate and relevant opinion.
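The update rule of Bayes’ theorem referred to above is most transparent in its odds form, which separates the expert’s contribution (the likelihood ratio) from the fact finder’s prior:

```latex
\underbrace{\frac{\Pr(H_1 \mid E, I)}{\Pr(H_2 \mid E, I)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{\Pr(E \mid H_1, I)}{\Pr(E \mid H_2, I)}}_{\text{likelihood ratio}}
\;\times\;
\underbrace{\frac{\Pr(H_1 \mid I)}{\Pr(H_2 \mid I)}}_{\text{prior odds}}
```

where $E$ denotes the observations, $H_1$ and $H_2$ the competing propositions, and $I$ the relevant background information.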


De Boer, H.H.; Berger, C.E.H.; Blau, S. Providing a Forensic Expert Opinion on the “Degree of Force”: A Discussion. Biology 2021, 10, 1336.


Forensic pathologists and anthropologists are often asked in court for an opinion about the degree of force required to cause a specific injury. This paper examines and discusses the concept of ‘degree of force’ and why it is considered a pertinent issue in legal proceedings. This discussion identifies the implicit assumptions that often underpin questions about the ‘degree of force’. The current knowledge base for opinions on the degree of force is then provided by means of a literature review. A critical appraisal of this literature shows that many of the results from experimental research are of limited value in routine casework. An alternative approach to addressing the issue is provided through a discussion of the application of Bayes’ theorem, also called the likelihood ratio framework. It is argued that the use of this framework makes it possible for an expert to provide relevant and specific evidence, whilst maintaining the boundaries of their field of expertise.

Mattijssen, E.J.A.T.; Witteman, C.L.M.; Berger, C.E.H.; Stoel, R.D. Firearm Examination: Examiner Judgments and Computer-Based Comparisons. J. Forensic Sci. 2021, 66, 96-111.


Forensic firearm examination provides the court of law with information about the source of fired cartridge cases. We assessed the validity of source decisions of a computer-based method and of 73 firearm examiners who compared breechface and firing pin impressions of 48 comparison sets. We also compared the computer-based method’s comparison scores with the examiners’ degree-of-support judgments and assessed the validity of the latter.
The true positive rate (sensitivity) and true negative rate (specificity) of the computer-based method (for the comparison of both the breechface and firing pin impressions) were 94.4% and at least 91.7%, respectively. For the examiners, the true positive rate was at least 95.3% and the true negative rate was at least 86.2%. The validity of the source decisions improved when the evaluations of breechface and firing pin impressions were combined, and for the examiners also when the perceived difficulty of the comparison decreased. The examiners were reluctant to provide source decisions for ‘difficult’ comparisons even though their source decisions were mostly correct. The correlation between the computer-based method’s comparison scores and the examiners’ degree-of-support judgments ranged from low for the same-source comparisons to negligible for the different-source comparisons. Combining the outcomes of computer-based methods with the judgments of examiners could increase the validity of firearm examinations.
The examiners’ numerical degree-of-support judgments for their source decisions were not well-calibrated and showed clear signs of overconfidence. We suggest studying the merits of performance feedback to calibrate these judgments.
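For readers less familiar with these rates, the two headline numbers can be sketched as follows; the counts below are invented for illustration and are not the study’s data (they merely reproduce rates close to those reported):

```python
# Illustrative only: hypothetical confusion-matrix counts, not the study's data.
# Sensitivity = true positives / all same-source trials;
# specificity = true negatives / all different-source trials.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of same-source comparisons correctly called 'same source'."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of different-source comparisons correctly called 'different source'."""
    return tn / (tn + fp)

# Hypothetical counts for a batch of comparisons:
tp, fn = 34, 2   # same-source: correctly identified / missed
tn, fp = 33, 3   # different-source: correctly excluded / false positives

print(f"sensitivity = {sensitivity(tp, fn):.3f}")   # 34/36 -> 0.944
print(f"specificity = {specificity(tn, fp):.3f}")   # 33/36 -> 0.917
```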


Robertson, B.; Buckleton, J.S.; Evett, I.W.; Berger, C.E.H. Letter to the Editor, Response to Stiffelman. Crim. L.R. 2020, 1156-1159.


An article published last year is receiving unfortunate attention in legal media outlets. Bess Stiffelman Esq in “No Longer the Gold Standard: Probabilistic Genotyping is Changing the Nature of DNA Evidence in Criminal Trials” (2019) 24 Berkeley J. Crim. L. 110 runs four main arguments attacking the use of likelihood ratios (LRs) for evaluating DNA evidence and reporting the analysis in court:
(1) LRs infringe the ultimate issue rule.
(2) LRs infringe on the presumption of innocence.
(3) LRs require that the propositions compared be exhaustive.
(4) Traditional DNA evidence is not subject to the same criticisms.
None of these arguments is new; all have reared their heads and been demonstrated to be baseless several times in the last 30 years. There is no sign that Stiffelman or the referees were familiar with the extensive literature. A full response to Stiffelman by the authors of this letter and others is published at (2020) 310 Forensic Science International 110251. It is important that any lawyers interested in Stiffelman’s arguments read the response. These arguments have also been dealt with in numerous other sources such as B. Robertson and G.A. Vignaux, Interpreting Evidence: Evaluating Forensic Science in the Courtroom (Chichester: John Wiley & Sons, 1995) and second edition with Charles Berger (2016).
We deal with these arguments in turn.

Mattijssen, E.J.A.T.; Witteman, C.L.M.; Berger, C.E.H.; Stoel, R.D. Assessing the frequency of general fingerprint patterns by fingerprint examiners and novices. Forensic Sci. Int. 2020, 313, 110347.


The rarity of general fingerprint patterns should be taken into account in the assessment of fingerprint evidence, to provide a more complete assessment than one based only on the minutiae. This should be done because the rarer the corresponding pattern, the stronger the support for the hypothesis that the fingermark stems from the same source as the reference fingerprint. According to the theories of perceptual learning, the exemplar theory of categorization, and visual statistical learning, fingerprint examiners’ experience should enable them to provide meaningful assessments of the frequencies of these general patterns. In this study we examined the accuracy of fingerprint examiners’ and novices’ judgments of the rarity of general fingerprint patterns.
We found that fingerprint examiners seem to have acquired some knowledge about the rarity of general patterns, but had difficulty expressing this knowledge quantitatively using a novel sub-classification of general patterns. As a consequence, their judgments were not accurate and they did not perform better on this task than novices. For both participant groups judgments of more common patterns were more accurate. However, examiners did outperform novices in rank ordering general patterns from common to rare. We conclude that our study does not show that fingerprint examiners have expertise in explicitly judging frequencies of novel sub-classifications of general fingerprint patterns, but our results do indicate that the examiners have acquired knowledge about the rarity of patterns that novices do not possess.

Ramos, D.; Meuwly, D.; Haraksim, R.; Berger, C.E.H. Chapter 7: Validation of forensic automatic likelihood ratio methods. In: Banks, D.; Kafadar, K.; Kaye, D.; Tackett, M. (eds) Handbook of Forensic Statistics, 2020, Chapman & Hall/CRC Handbooks of Modern Statistical Methods.


In forensic evidence evaluation, practitioners assign a strength of evidence to forensic observations and analytical results, in order to address hypotheses at source or activity level. This assignment is based on the practitioner’s assessment and increasingly on the computations of automatic likelihood ratio (LR) methods.
This chapter focuses on the validation of automatic methods developed to assign a strength of evidence at source level to the analytical results originating from the comparison of distinctive features of two specimens: a trace or mark of an unknown source and a reference specimen of a known source. Usually, a trace or mark is produced under the uncontrolled conditions of a criminal activity, while a reference specimen is produced under controlled and more ideal conditions.
We will review some of the performance characteristics needed to accomplish any validation process, and we will give special attention to the calibration of likelihood ratios, because of its importance and its relative novelty in forensic interpretation. Throughout this chapter, we will follow a Bayesian interpretation of probability, and the recent guideline for evaluative reporting in forensic science in Europe.
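Calibration of likelihood ratios, mentioned above, is commonly quantified in this validation literature with the log-likelihood-ratio cost (Cllr). A minimal sketch, with invented LR values (the metric is standard; the data are not from the chapter):

```python
import math

def cllr(lrs_same: list[float], lrs_diff: list[float]) -> float:
    """Log-likelihood-ratio cost: penalizes LRs that point the wrong way,
    and more strongly the more extreme they are.
    0 is perfect; 1 corresponds to an uninformative method (LR = 1 always)."""
    term_same = sum(math.log2(1.0 + 1.0 / lr) for lr in lrs_same) / len(lrs_same)
    term_diff = sum(math.log2(1.0 + lr) for lr in lrs_diff) / len(lrs_diff)
    return 0.5 * (term_same + term_diff)

# Invented example LRs: same-source comparisons should yield large LRs,
# different-source comparisons small ones.
lrs_same = [120.0, 45.0, 300.0, 8.0]
lrs_diff = [0.02, 0.3, 0.01, 0.15]
print(f"Cllr = {cllr(lrs_same, lrs_diff):.3f}")
```

A well-calibrated, discriminating method yields Cllr close to 0; values near or above 1 signal poor calibration or poor discrimination.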

Berger, C.E.H.; Van Wijk, M.; De Boer, H.H. Chapter 6.1: Bayesian inference in personal identification. In: Obertová, Z.; Cattaneo, C.; Stewart, A. (eds) Statistics in Forensic Anthropology, 2020, Elsevier.


Providing an opinion on (elements of) the identity of heavily decomposed, often skeletonized remains is generally the mainstay of a forensic anthropologist’s daily work. Often, forensic anthropologists accompany such opinions with a measure of uncertainty, such as a confidence interval. Such statements, however, give posterior probabilities without taking into account all the other (non-anthropological) evidence and information in the case. The application of logic, through the use of Bayes’ theorem, can provide a solution for this issue. This chapter explores how a Bayesian approach can be applied to interpret features observed during the examination of skeletal identifiers. It specifically focuses on two basic elements of the forensic anthropological biological profile: one with a binary outcome (sex estimation) and one with a categorical or continuous outcome (age estimation). Among other topics, the formulation of propositions, the calculation of likelihood ratios, the choice of reference data, and the combination of evidence are discussed.

De Boer, H.H.; Van Wijk, M.; Berger, C.E.H. Chapter 6.3: Communicating evidence, with focus on the use of Bayes' theorem. In: Obertová, Z.; Cattaneo, C.; Stewart, A. (eds) Statistics in Forensic Anthropology, 2020, Elsevier.


Forensic science is only a meaningful endeavour when its findings are communicated as clearly and unambiguously as possible, thereby facilitating a good evaluation of the presented evidence. Due to their specialist knowledge, forensic scientists have a duty to help the receiving party understand their conclusions correctly. This chapter aims to aid the forensic anthropologist in this task. It focuses on explaining how to report the strength of evidence using Bayes’ theorem. This chapter also touches on the difference between investigative and evaluative reporting and the use of verbal scales. Although the chapter is geared toward forensic anthropologists, it is also applicable to other forensic scientists.

Berger, C.E.H.; De Boer, H.H.; Van Wijk, M. Chapter 3.2: Use of the Bayes’ theorem in data analysis and interpretation. In: Obertová, Z.; Cattaneo, C.; Stewart, A. (eds) Statistics in Forensic Anthropology, 2020, Elsevier.


Making questions explicit in propositions, following the laws of logic, and basing the answers on information and expertise are the fundamental principles of forensic interpretation. Additionally, they define the role of the forensic scientist in the criminal justice system. Given the importance of contextual information for the interpretation of evidence, this chapter explains when that information is task-relevant and when it is task-irrelevant and should be kept from the examining scientist. Deviating from logic results in errors of reasoning, which are identified, as are conclusions that suggest more than they actually mean. Finally, guidance is offered for the phrasing of valid and useful propositions that help to address the most relevant issues in the case.

Kokshoorn, B.; De Koeijer, J.A.; Aarts, B.; Blankers, B.J.; Matas Llonch, T.; Berger, C.E.H. Scenario’s, hypothesen, aannamen en context-informatie; wat bedoelt de deskundige eigenlijk? Expertise en Recht 2020, 3, 96-104.


At the request of the commissioning party, forensic experts can evaluate the results of an examination into the nature or origin of traces, given propositions at the activity level. To carry out such an evaluation, the expert needs, in addition to a clearly formulated question, information about the facts and circumstances of the case. The emphasis here is on the actions that the persons involved are alleged to have performed. The expert will break this information down into propositions against which the examination results are tested, into assumptions, and into undisputed contextual information. In this article we show why and how the expert does this, which reasoning steps are involved, and what is needed to do so. We do this using examples from the practice of forensic human DNA analysis.

Buckleton, J.S.; Bright, J.-A.; Robertson, B.; Curran, J.M.; Berger, C.E.H.; Taylor, D.; Hicks, T.N.; Gittelson, S.; Evett, I.W.; Pugh, S.N.; Jackson, G.; Kelly, H.; Kalafut, T.; Bieber, F.R. A review of Likelihood Ratios in Forensic Science based on a critique of Stiffelman “No Longer the Gold Standard: Probabilistic Genotyping is Changing the Nature of DNA Evidence in Criminal Trials”. Forensic Sci. Int. 2020, 307, 110251.


Stiffelman [1] gives a broad critique of the application of likelihood ratios (LRs) in forensic science, in particular their use in probabilistic genotyping (PG) software. These are discussed in this review.
LRs do not infringe on the ultimate issue. The Bayesian paradigm clearly separates the role of the scientist from that of the decision makers and distances the scientist from comment on the ultimate and subsidiary issues.
LRs do not affect the reasonable doubt standard. Fact finders must still make decisions considering all the evidence, not just that given probabilistically.
LRs do not infringe on the presumption of innocence. The presumption of innocence does not equate with a prior probability of zero but simply that the POI is no more likely than anyone else to be the donor.
Propositions need to be exhaustive within the context of the case. That is, propositions deemed relevant by either defense or prosecution which are not fanciful must not be omitted from consideration.
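The point about the presumption of innocence can be made concrete with a toy calculation, in which the person of interest (POI) starts no more likely than anyone else in a hypothetical donor pool (all numbers below are invented for illustration):

```python
# A prior of 'no more likely than anyone else': with N plausible donors,
# prior odds for the POI being the donor are 1 : (N - 1) -- small, but not zero.
N = 1_000_000          # hypothetical pool of possible donors
prior_odds = 1.0 / (N - 1)

lr = 1e8               # hypothetical likelihood ratio from the DNA evidence
posterior_odds = lr * prior_odds
posterior_prob = posterior_odds / (1.0 + posterior_odds)

print(f"prior odds     = 1 : {N - 1}")
print(f"posterior odds = {posterior_odds:.1f} : 1")
print(f"posterior probability = {posterior_prob:.4f}")
```

The presumption of innocence is thus expressed as low prior odds, not a prior of zero; a zero prior could never be updated by any evidence.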

Mattijssen, E.J.A.T.; Witteman, C.L.M.; Berger, C.E.H.; Stoel, R.D. Cognitive Biases in the Peer Review of Bullet and Cartridge Case Comparison Casework: A Field Study. Sci. Justice 2020, 60, 337-346.


Forensic judgments and their peer review are often the result of human assessment and are thus subjective and prone to bias. This study examined whether bias affects forensic peer review. We hypothesized that the probability of disagreement between two forensic examiners about the proposed conclusion would be higher with “blind” peer review (the reviewer saw only the first examiner’s comparison photos) than with “non-blind” peer review (the reviewer also saw the first examiner’s interpretation and proposed conclusion). We also hypothesized that examiners with a higher perceived professional status would have a larger effect on the reported conclusion than examiners with a lower status. We acquired data during a non-blind and a blind peer review procedure in a naturalistic, covert study with eight examiners (3-26 years of experience). We acquired 97 conclusions of bullet and cartridge case comparisons in the blind and 471 in the non-blind peer review procedure. The odds of disagreement between examiners about the evidential strength of a comparison were approximately five times larger (95%-CI [3.06, 8.50]) in the blind than in the non-blind procedure, with disagreement about 42.3% and 12.5% of the proposed conclusions, respectively. Also, the odds that their proposed conclusion was reported as the final conclusion were approximately 2.5 times higher for the higher-status examiners than for the lower-status examiners. Our results support both the hypothesis that bias occurs during non-blind forensic peer review and the hypothesis that higher-status examiners determine the outcome of a discussion more than lower-status examiners. We conclude that blind peer review may reduce the probability of bias and that status effects have an impact on the peer reviewing process.

Mattijssen, E.J.A.T.; Witteman, C.L.M.; Berger, C.E.H.; Brand, N.W.; Stoel, R.D. Validity and reliability of forensic firearm examiners. Forensic Sci. Int. 2020, 307, 110112.


Forensic firearm examiners compare the features in cartridge cases to provide a judgment addressing the question about their source: do they originate from one and the same firearm or from two different firearms? In this article, the validity and reliability of these judgments are studied and compared to the outcomes of a computer-based method. The features we looked at were the striation patterns of the firing pin aperture shear marks of four hundred test shots from two hundred Glock pistols, which were compared by a computer-based method. Sixty of the resulting 79,800 comparisons were shown to 77 firearm examiners. They were asked to judge whether the cartridge cases had the same source or a different source, and to indicate the degree of support the evidence provided for those judgments.
The results show that the true positive rates (sensitivity) and the true negative rates (specificity) of firearm examiners are quite high. The examiners seem to be slightly less proficient at identifying same-source comparisons correctly, while they outperform the used computer-based method at identifying different-source comparisons.
The judged degrees of support by examiners who report likelihood ratios are not well-calibrated. The examiners are overconfident, giving judgments of evidential strength that are too high. The judgments of the examiners and the outcomes of the computer-based method are only moderately correlated.
We suggest implementing performance feedback to reduce overconfidence and improve the calibration of degree-of-support judgments, and studying the possibility of combining the judgments of examiners with the outcomes of the computer-based method to increase the overall validity.

De Koeijer, J.A.; Sjerps, M.J.; Vergeer, P.; Berger, C.E.H. Combining evidence in complex cases - a practical approach to interdisciplinary casework. Sci. Justice 2020, 60, 20-29.


Activity level evaluations, although still a major challenge for many disciplines, bring a wealth of possibilities for a more formal approach to the evaluation of interdisciplinary forensic evidence. This paper proposes a practical methodology for combining evidence from different disciplines within the likelihood ratio framework. Evidence schemes introduced in this paper make the process of combining evidence more insightful and intuitive, thereby assisting experts in their interdisciplinary evaluation and in explaining this process to the courts.
When confronted with two opposing scenarios and multiple types of evidence, the likelihood ratio approach allows experts to combine this evidence in a probabilistic manner. Parts of the prosecution and defence scenarios for which forensic science is expected to be informative are identified. For these so-called core elements, activity level propositions are formulated. Afterwards, evidence schemes are introduced to assist the expert in combining the evidence in a logical manner. Two types of evidence relations are identified: serial and parallel evidence. Practical guidelines are given on how to deal with both types of evidence relations when combining the evidence.


Robertson, B.; Berger, C.E.H. Interpreting evidence of torture. Medical Law Review 2019, 27, 687–695.


The Istanbul Protocol provides a scheme for giving evidence of signs of torture. This scheme does not conform with the principles of logical inference, revolving as it does around the concept of “consistency”. The shortcomings of the Protocol are illustrated using the evidence given in the recent case of KV (Sri Lanka), and the logical approach to such evidence is explained.

Twisk, K.; Dubelaar, M.J.; Berger, C.E.H. De kenniskloof verkend: een onderzoek naar de waardering van (complex) DNA-bewijs in strafzaken; Exploring the knowledge gap: an investigation into the valuation of (complex) DNA evidence in criminal cases. Expertise en Recht 2019, 3, 105-111.


The evaluation of evidence has been a hot issue in criminal cases for years, as research shows that judges and other litigants have great difficulty interpreting evidence correctly. The literature shows which errors of reasoning judges can make in this respect, but it is not clear how often they actually (still) occur. In recent years, a number of initiatives have already been taken to improve the quality of reports and their understanding by parties in the criminal process. This study looked at the quality of court rulings with regard to the evaluation of DNA mixture profiles. In 27 of the 78 cases investigated (35%), the evidential value of the reported DNA examination was misinterpreted and the so-called prosecutor's fallacy was committed. In a further 28 of the 78 cases (36%), the judge took refuge in vague or incomplete wording. This raises the question of whether, and if so how, the understanding of judges and the quality of the judicial motivation of judgments can be (further) improved.

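The prosecutor’s fallacy mentioned above is the transposition of a conditional: reading the probability of the evidence given innocence as if it were the probability of innocence given the evidence. A toy numerical sketch (all numbers invented):

```python
# Random-match probability: P(matching profile | person is not the donor).
p_match_given_not_donor = 1e-6

# The fallacy would read this as P(not the donor | match) = 1e-6.
# Correct reasoning needs a prior. Suppose 500,000 people could have left the trace:
n_alternatives = 500_000
prior_odds = 1.0 / n_alternatives            # POI vs. the pool of alternatives
lr = 1.0 / p_match_given_not_donor           # LR contributed by the matching profile

posterior_odds = lr * prior_odds
p_donor_given_match = posterior_odds / (1.0 + posterior_odds)
print(f"P(donor | match) = {p_donor_given_match:.3f}")  # far below 1 - 1e-6
```

Despite the one-in-a-million match probability, the posterior probability that the person of interest is the donor is here only about two thirds, showing why transposing the conditional overstates the evidence.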


Ton, E.; Limborgh, J.; Aarts, B.; Kokshoorn, B.; de Koeijer, J.A.; de Keijser, J.; Berger, C.E.H.; Zuidberg, M. Plaats delict-onderzoek met vooruitziende blik; Crime scene investigation with foresight. Expertise en Recht 2018, 4, 144-149.



If trace material of a suspect is found at a crime scene, it does not necessarily follow that the suspect is the perpetrator of the crime. A suspect can put forward an alternative scenario with regard to the presence of the trace material or with regard to his involvement in the crime. It is not easy to anticipate scenarios of a still unknown suspect during forensic trace examination. Nevertheless, this is very valuable. If such alternatives are not considered during forensic trace examination, there is a chance that the suspect’s scenario, in relation to the crime scenario, cannot be tested at a later stage. In this article we show that it is possible to anticipate possible scenarios of a suspect in a structured and well-founded manner at the crime scene. Anticipating scenarios can make an important contribution to forensic examination for the purpose of finding the truth in criminal proceedings.

Gittelson, S.; Berger, C.E.H.; Jackson, G.; Evett, I.W.; Champod, C.; Robertson, B.; Curran, J.M.; Taylor, D.; Weir, B.S.; Coble, M.D.; Buckleton, J.S. A response to “Likelihood ratio as weight of evidence: A closer look” by Lund and Iyer. Forensic Sci. Int. 2018, 288, e15-e19.


Recently, Lund and Iyer (L&I) raised an argument regarding the use of likelihood ratios in court. In our view, their argument is based on a lack of understanding of the paradigm. L&I argue that the decision maker should not accept the expert’s likelihood ratio without further consideration. This is agreed by all parties. In normal practice, there is often considerable and proper exploration in court of the basis for any probabilistic statement. We conclude that L&I argue against a practice that does not exist and which no one advocates. Further we conclude that the most informative summary of evidential weight is the likelihood ratio. We state that this is the summary that should be presented to a court in every scientific assessment of evidential weight with supporting information about how it was constructed and on what it was based.

Kerkhoff, W.; Stoel, R.D.; Mattijssen, E.J.A.T.; Berger, C.E.H.; Didden, F.W.; Kerstholt, J.H. A part-declared blind testing program in firearms examination. Sci. Justice 2018, 58, 258-263.


In 2015 and 2016 the Central Unit of the Dutch National Police created and submitted 21 cartridge case comparison tests as real cases to the Netherlands Forensic Institute (NFI), under supervision of the University of Twente (UT). A total of 53 conclusions were drawn in these 21 tests. For 31 conclusions the underlying ground truth was ‘positive’, in the sense that they addressed a cluster of cartridge cases that was fired from the same firearm. For 22 conclusions the ground truth was ‘negative’, in the sense that the cartridge cases were fired from different firearms. In none of the conclusions, which resulted from examinations under casework conditions, was misleading evidence reported. All conclusions supported the hypothesis reflecting the ground truth. This article discusses the design and results of the tests in more detail.

Berger, C.E.H.; Stoel, R.D. Response to “A study of the perception of verbal expressions of the strength of evidence”. Sci. Justice 2018, 58, 76-77.


We would like to respond to the recent paper “Understanding forensic expert evaluative evidence: A study of the perception of verbal expressions of the strength of evidence” by Arscott et al.
We agree that verbal expressions of the strength of evidence can be interpreted in varying ways, not only by the people who read them but also by those who express them. It is also possible that different verbal expressions are interpreted in the same way by different readers or reporting scientists. This is the reason that the Association of Forensic Science Providers (AFSP) and the European Network of Forensic Science Institutes (ENFSI) have published guidelines that call for forensic institutes to provide verbal scales and numerically define the verbal expressions therein. This, at least formally, solves the issue of the perception of the intended strength of evidence.


Slooten, K.; Berger, C.E.H. Response paper to “The likelihood of encapsulating all uncertainty”: The relevance of additional information for the LR. Sci. Justice 2017, 57, 468-471.


In this response paper, part of the Virtual Special Issue on “Measuring and Reporting the Precision of Forensic Likelihood Ratios”, we further develop our position on likelihood ratios which we described previously in Berger et al. (2016) “The LR does not exist”. Our exposition is inspired by an example given in Martire et al. (2016) “On the likelihood of encapsulating all uncertainty”, where the consequences of obtaining additional information on the LR were discussed. In their example, two experts use the same data in a different way, and the LRs of these experts change differently when new data are taken into account. Using this example as a starting point we will demonstrate that the probability distribution for the frequency of the characteristic observed in trace and reference material can be used to predict how much an LR will change when new data become available. This distribution can thus be useful for such a sensitivity analysis, and address the question of whether to obtain additional data or not. But it does not change the answer to the original question of how to update one’s prior odds based on the evidence, and it does not represent an uncertainty on the likelihood ratio based on the current data.
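The role of the frequency distribution described above can be sketched with a minimal Beta-Binomial illustration (our sketch with invented counts, not the authors’ computation):

```python
# Posterior for the frequency f of an observed characteristic, given k occurrences
# in a reference sample of size n, under a uniform Beta(1, 1) prior: Beta(k+1, n-k+1).
# The LR for a corresponding characteristic is then 1 over the posterior
# predictive probability (k+1)/(n+2).

def lr_from_counts(k: int, n: int) -> float:
    """Likelihood ratio based on counts, integrating out the unknown frequency."""
    return (n + 2) / (k + 1)

lr_now = lr_from_counts(k=2, n=100)            # current, small database
# Sensitivity analysis: how much could the LR change with 1000 extra samples?
# If the characteristic keeps appearing at roughly its posterior-mean rate
# (about 3/102, i.e. ~29 occurrences in 1000 new samples), the LR barely moves:
lr_more = lr_from_counts(k=2 + 29, n=100 + 1000)

print(f"LR with n = 100:  {lr_now:.1f}")
print(f"LR with n = 1100: {lr_more:.1f}")
```

The spread of the Beta posterior indicates how much the LR could move when new data arrive, which is useful for deciding whether to collect more data; but the LR itself, based on the current data, remains a single number.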

Kokshoorn, B.; Blankers, B.J.; De Zoete, J.C.; Berger, C.E.H. Activity level DNA evidence evaluation: on propositions addressing the actor or the activity. Forensic Sci. Int. 2017, 278, 115-124. Abstract


More often than not, the source of DNA traces found at a crime scene is not disputed, but the activity or timing of events that resulted in their transfer is. As a consequence, practitioners are increasingly asked to assign a value to DNA evidence given propositions about activities provided by prosecution and defense counsel. Given that the dispute concerns the nature of the activity that took place or the identity of the actor that carried out the activity, several factors will determine how to formulate the propositions. Determining factors are (1) whether defense claims the crime never took place, (2) whether defense claims someone other than the accused (either an unknown individual or a known person) performed the criminal activity, and (3) whether it is claimed and disputed that the suspect performed an alternative, legitimate activity or has a relation to the victim, the object, or the scene of crime that implies a legitimate interaction. Addressing such propositions using Bayesian networks, we demonstrate the effects of the various proposition sets on the evaluation of the evidence.

Evett, I.W.; Berger, C.E.H.; Buckleton, J.S.; Champod, C.; Jackson, G. Finding the Way Forward for Forensic Science in the US - A commentary on the PCAST report. Forensic Sci. Int. 2017, 278, 16-23. Abstract


A recent report by the US President’s Council of Advisors on Science and Technology (PCAST) has made a number of recommendations for the future development of forensic science. Whereas we all agree that there is much need for change, we find that the PCAST report recommendations are founded on serious misunderstandings. We explain the traditional forensic paradigms of match and identification and the more recent foundation of the logical approach to evidence evaluation. This forms the groundwork for exposing many sources of confusion in the PCAST report. We explain how the notion of treating the scientist as a black box and the assignment of evidential weight through error rates is overly restrictive and misconceived. Our own view sees inferential logic, the development of calibrated knowledge and understanding of scientists as the core of the advance of the profession.

Sjerps, M.; Kloosterman, A.; Berger, C.E.H. Over de rapportage van het NFI: een weerwoord. Nederlands Juristenblad 2017, 40, 2945-2951. Abstract


Recently, legal psychologist Prof. Peter van Koppen published an article in this journal with firm criticism of the reports of the Netherlands Forensic Institute (NFI), using one specific NFI report on a special type of hammer, a lath hammer ('tengelhamer'), as an example. In this article we discuss that criticism. In short: we see nothing in Van Koppen's article that calls for revising the reports or for starting yet another investigation into NFI reporting. The approach Van Koppen advocates is scientifically outdated and has been rejected for several reasons. Regarding the interest of the defense in abbreviated DNA reports, however, Van Koppen has a point: an extended report could be requested more often. Adding DNA profiles to NFI reports by default, though, runs into privacy objections and would require a change in the law.

Groen, W.J.M.; Berger, C.E.H. Crime Scene Investigation, Archaeology and Taphonomy: Reconstructing Activities at Crime Scenes. In: Schotmans, E.; Márquez-Grant, N.; Forbes, S. (ed) Taphonomy of human remains: Forensic analysis of the dead and the depositional environment, 2017, Wiley-Blackwell. cover Abstract


Archaeologists make use of analytical reasoning when reconstructing the past from excavated material finds and features, or the material record. They reason backwards from the consequences of past human activities to these activities. This perception of physical evidence as a proxy for past (human) activity is not unique for archaeology; it is also encountered in forensic science, for example during the investigation of a crime scene. This similarity between archaeology and CSI practice has also been noted in scene of crime textbooks. However, archaeology and CSI practice apply dissimilar lines of reasoning when analysing and interpreting past (human) activity. In this chapter the authors explore the applicability of the archaeological paradigm in the CSI practice, the perception of physical evidence, site formation processes and taphonomic transformations (e.g. decay, degradation, corrosion). The chapter starts with a discussion on the fundamentals of CSI, after which it focuses on the archaeological paradigm as relevant to the CSI practice. In part three the archaeological perception of physical evidence as assemblages, with emphasis on the archaeological site formation processes, is discussed. The chapter ends with a conclusion summarising the value of integrating the archaeological and criminalistic frameworks in the CSI practice.

Berger, C.E.H. De waarheidsvinding naar een hoger niveau; Taking the search for truth to a higher level. Inaugural Lecture, Leiden University, February 3rd, 2017. Abstract


This inaugural lecture was delivered on February 3rd, 2017, in the Academiegebouw of Leiden University.

Inaugural lecture by Prof.dr. Charles E.H. Berger on the acceptance of his renewed appointment as professor of Criminalistics by special appointment at Leiden University on February 3rd, 2017.


Berger, C.E.H.; Slooten, K. The LR does not exist, Special issue on measuring and reporting the precision of forensic likelihood ratios. Sci. Justice 2016, 56, 388-391. Abstract


More than 40 years ago, De Finetti warned that probability is a misleading misconception when regarded as objectively existing exterior to the mind. According to De Finetti, probabilities are necessarily subjective, and quantify our belief in the truth of events in the real world. Given evidence of a shared feature of a trace and an accused, we apply this framework to assign an evidential value to this correspondence. Dividing 1 by the objectively existing proportion of the population sharing that feature would give that evidential value - expressed as a likelihood ratio (LR) - only if that proportion were known. As in practice the proportion can only be estimated, this leads some to project their sampling uncertainty - or precision - associated with the estimated proportion onto the likelihood ratio, and to report an interval. Limited data should limit our LR however, because as we will demonstrate the LR is given by what we know about the proportion rather than by the unknown proportion itself. Encapsulating all uncertainty - including sampling uncertainty of the proportion - our LR reflects how much information we have retrieved from the feature regarding the trace's origin, based on our present knowledge. Not an interval but a number represents this amount of information, equal to the logarithm of the LR. As long as we know how to interpret the evidence with a well-defined probabilistic model, we know what our evidence is worth.
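The paper's central point, that limited data should limit the LR, can be illustrated with a small numerical sketch. The following is not the paper's derivation but a simplified illustration, assuming a Beta(1, 1) prior on the proportion and ignoring the conditioning subtleties the paper discusses:

```python
# Illustrative sketch (not the paper's derivation): encapsulating the
# sampling uncertainty about a feature proportion in a single LR.
# Assumes a Beta(1, 1) prior on the proportion theta; k of n sampled
# individuals share the feature observed in trace and reference material.

def lr_from_counts(k, n):
    """LR based on the posterior mean of theta, not a plug-in estimate."""
    posterior_mean = (k + 1) / (n + 2)  # mean of Beta(1 + k, 1 + n - k)
    return 1.0 / posterior_mean

# With k = 0 a plug-in estimate of theta would be zero and the plug-in LR
# undefined; the posterior-mean LR stays finite: limited data limit the LR.
print(lr_from_counts(0, 100))
print(lr_from_counts(5, 100))
```

With zero observed matches in a sample of 100, the LR is bounded at 102 rather than infinite, reflecting how much information the data actually provide.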

Berger, C.E.H. Comments on the views of the National Commission on Forensic Science concerning Statistical Statements in Forensic Testimony. Abstract


I would like to thank the National Commission on Forensic Science for their work on this views document, which I believe can be an important step forward. I wholeheartedly support this effort and I am happy with the direction in which it is moving. I also thank the commission for giving me the opportunity to comment on the views presented. My comments will be on the scientific aspects of the text only, in the hope that these comments will help to further strengthen the views of the commission.

Stoel, R.D.; Kerkhoff, W.; Mattijssen, E.J.A.T.; Berger, C.E.H. Building the research culture in the forensic sciences: Announcement of a double blind testing program. Sci. Justice 2016, 56, 155-156.

Buiskool, M.; Nijs, H.G.T.; Karst, W.A.; Berger, C.E.H. More on the strength of evidence in forensic pathology. Forensic Sci. Med. Pathol. 2016, 12, 238-239.

Robertson, B.; Vignaux, G.A.; Berger, C.E.H. Interpreting Evidence: Evaluating Forensic Science in the Courtroom. 2nd edition, 2016, Wiley. cover Abstract


Interpreting Evidence: Evaluating Forensic Science in the Courtroom is a book with an important agenda - to improve the interpretation of expert testimony, evidence, and their accumulation in the courtroom. The suggested methodology is the Bayesian theory of evidence, a system well understood by the mathematical community, but which has yet to gain widespread acceptance in court.
The book is aimed mostly at forensic scientists and people in the legal profession. It presents a well-considered account of why existing methods of presenting testimony are lacking, and how many of these problems are alleviated simply by using Bayesian conditioning. The book's style is easily readable, even to people without a formal mathematical background, and yet is sufficiently precise to avoid muddling the issues it discusses. As such, it meets its intended goals: on the one hand, it ought to be educational to forensic scientists, in how to present testimony in a useful manner, that legal professionals can better understand and trust. On the other hand, it introduces legal professionals to the Bayesian method of evidence accumulation, and how it allows for consideration of both forensic and other evidence in context of the case before the court as a whole.
Additionally, parts of the book make extremely interesting reading for a person with neither forensic science nor legal training. It contains a simplified introduction to various mechanisms for obtaining forensic evidence in various domains, such as fingerprints, DNA samples, and blood samples. This makes the book a useful introduction to forensic science and the related legal procedures, for students (or scientists) of other academic fields, and possibly even for the open-minded non-academic.

From a review of the 1st edition (ISBN 0471-9602-68) by Shimony, S.E. in Artificial Intelligence and Law 2001, 9, 215-217.

Mattijssen, E.J.A.T.; Kerkhoff, W.; Berger, C.E.H.; Dror, I.E.; Stoel, R.D. Implementing context information management in forensic casework: Minimizing contextual bias in firearms examination. Sci. Justice 2016, 56, 113-122. Abstract


Managing context information in forensic casework aims to minimize task-irrelevant information while maximizing the task-relevant information that reaches the examiner. A design and implementation of context information management (CIM) is described for forensic firearms and ammunition examination. Guided by a taxonomy of different sources of context information, a flow-chart was constructed that specifies the process of casework examination and context information management. Due to the risk of bias, another examiner may need to be involved when context information management is unsuccessful. Application of such context information management systems does not make a subjective examination objective, but can limit the risks of bias with a minimal investment of time and resources.


De Wolff, T.R.; Kal, A.J.; Berger, C.E.H.; Kokshoorn, B. A probabilistic approach to body fluid typing interpretation: an exploratory study on forensic saliva testing. Law, Probability and Risk 2015, 14, 323-339. Abstract


Identifying specific human body fluids and establishing their presence in traces can be crucial to help reconstructing alleged incidents in criminal cases. It is up to the forensic practitioner to test for the presence of body fluids, interpret the test results and draw scientifically supported conclusions that can be used in a court of law. This study presents a Bayesian network for the interpretation of test results for human saliva based on the presence of human salivary α-amylase. The Bayesian network can be used by forensic practitioners as an exploratory tool to form their expert opinion on the presence or absence of saliva in a trace.
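The kind of inference such a Bayesian network formalizes can be sketched with a single application of Bayes' rule. The numbers below are hypothetical, chosen only for illustration; they are not taken from the study:

```python
# Hedged numerical sketch of the inference the Bayesian network formalizes:
# Bayes' rule applied to a presumptive alpha-amylase test for saliva.
# All numbers are hypothetical and chosen only for illustration.

def posterior_p_saliva(prior, sensitivity, false_positive_rate):
    """P(saliva | positive test) via Bayes' rule."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# The LR of a positive test is sensitivity / false_positive_rate = 19 here,
# so even prior odds become posterior odds of 19 to 1.
print(posterior_p_saliva(prior=0.5, sensitivity=0.95, false_positive_rate=0.05))
```

A full network adds further nodes (e.g. for other amylase sources), but the update at each node follows this same logic.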

Kerkhoff, W.; Stoel, R.D.; Berger, C.E.H.; Mattijssen, E.J.A.T.; Hermsen, R.; Smits, N.; Hardy, H.J.J. Design and results of an exploratory double blind testing program in firearms examination. Sci. Justice 2015, 55, 514-519. Abstract


In 2010, the Netherlands Forensic Institute (NFI) and the University of Amsterdam (UvA) started a series of tests for the NFI's Firearms Section. Ten cartridge case and bullet comparison tests were submitted by various external parties as regular cases and mixed in the flow of real cases. The results of the tests were evaluated with the VU University Amsterdam (VUA). A total of twenty-nine conclusions were drawn in the ten tests. For nineteen conclusions the submitted cartridge cases or bullets were either fired from the questioned firearm or from one and the same firearm, in tests where no firearm was submitted. For ten conclusions the submitted cartridge cases or bullets were either fired from another firearm than the submitted one or from several firearms, in tests where no firearm was submitted. No misleading evidence was reported in any of the conclusions, in the sense that all conclusions supported the true hypothesis. This article discusses the design considerations of the program, contains details of the tests, and describes the various ways the test results were and could be analyzed.

Berger, C.E.H.; Sjerps, M. Conference report: International Conference on Forensic Inference and Statistics 2014. Expertise en Recht 2015, 3, 96-97.

Aitken, C.G.G.; Barrett, A.; Berger, C.E.H.; Biedermann, A.; Champod, C.; Hicks, T.N.; Lucena-Molina, J.; Lunt, L.; McDermott, S.; McKenna, L.; Nordgaard, A.; O’Donnell, G.; Rasmusson, B.; Sjerps, M.J.; Taroni, F.; Willis, S.M.; Zadora, G., ENFSI guideline for evaluative reporting in forensic science, 2015, European Network of Forensic Science Institutes (ENFSI). Abstract


This document provides all reporting forensic practitioners with a recommended framework for formulating evaluative reports and related requirements for the case file. An evaluative report is any forensic report containing an evaluative reporting section. It provides, ultimately, an assessment of the strength to be attached to the findings in the context of alleged circumstances. Although this guideline does not cover the requirements for intelligence, investigative or technical reporting, an evaluative report often also contains elements of technical reporting.

Haraksim, R.; Ramos, D.; Meuwly, D.; Berger, C.E.H. Measuring coherence of computer-assisted likelihood ratio methods. Forensic Sci. Int. 2015, 249, 123-132. Abstract


Measuring the performance of forensic evaluation methods that compute likelihood ratios (LRs) is relevant for both the development and the validation of such methods. A framework of performance characteristics categorized as primary and secondary is introduced in this study to help achieve such development and validation. Ground-truth labelled fingerprint data is used to assess the performance of an example likelihood ratio method in terms of those performance characteristics. Discrimination, calibration, and especially the coherence of this LR method are assessed as a function of the quantity and quality of the trace fingerprint specimen. Assessment of the coherence revealed a weakness of the comparison algorithm in the computer-assisted likelihood ratio method used.

Berger, C.E.H.; Vergeer, P.; Buckleton, J.S. A more straightforward derivation of the LR for a database search. Forensic Sci. Int. Genet. 2015, 14, 156-160. Abstract


A match between the DNA profiles of an accused person and a crime scene trace is one of the most common forms of forensic evidence. A number of years ago the so-called 'DNA controversy' was concerned with how to quantify the value of such evidence. Given its importance, the lack of understanding of such a basic issue was quite surprising and concerning. Deriving the equation for the likelihood ratio of a DNA database match in a much more direct and simple way is the topic of this paper. As it is much easier to follow, it is hoped that this derivation will contribute to that understanding.

Berger, C.E.H.; Taroni, F. Introduction to special ICFIS2014 issue. Law, Probability and Risk 2015, 14, 267. Abstract


Every three years the International Conference on Forensic Inference and Statistics (ICFIS) is held, alternating between Europe and the United States. The conference brings together forensic scientists and people from the legal domain around the theme of rational reasoning under uncertainty. Reasoning under uncertainty raises not only legal and scientific questions of technical difficulty and practical importance, but also fundamental questions in a wide variety of related domains.
In August 2014, the 9th International Conference on Forensic Inference and Statistics was hosted at the University of Leiden in the Netherlands. Many scientific and legal themes were discussed, including the communication between lawyers and experts, quantifying evidential value in different areas of expertise, DNA mixture interpretation, interpretation at activity level, and epidemiological evidence and the law.
In this special ICFIS2014 issue of Law, Probability and Risk we are proud to present a selection of papers of general interest covering different areas. We have a paper on forensic epidemiology in the medicolegal setting that provides an introduction for those not familiar with this area, one in which statistics feature as evidence for under-representation of protected groups, and finally two papers on the activity level interpretation of adhesive tape evidence and of body fluid test results respectively.


Vergeer, P.; Bolck, A.; Peschier, L.J.C.; Berger, C.E.H.; Hendrikse, J.N. Likelihood ratio methods for forensic comparison of evaporated gasoline residues. Sci. Justice 2014, 54, 401-411. Abstract


In the investigation of arson, evidence connecting a suspect to the fire scene may be obtained by comparing the composition of ignitable liquid residues found at the crime scene to ignitable liquids found in possession of the suspect. Interpreting the result of such a comparison is hampered by processes at the crime scene that result in evaporation, matrix interference, and microbial degradation of the ignitable liquid. Most commonly, gasoline is used as a fire accelerant in arson. In the current scientific literature on gasoline comparison, classification studies are reported for unevaporated and evaporated gasoline residues. In these studies the goal is to discriminate between samples of several sources of gasoline, based on a chemical analysis. While in classification studies the focus is on discrimination of gasolines, for forensic purposes a likelihood ratio approach is more relevant. In this work, a first step is made towards the ultimate goal of obtaining numerical values for the strength of evidence for the inference of identity of source in gasoline comparisons. Three likelihood ratio methods are presented for the comparison of evaporated gasoline residues (up to 75% weight loss under laboratory conditions). Two methods based on distance functions and one multivariate method were developed. The performance of the three methods is characterized by rates of misleading evidence, an analysis of the calibration and an information theoretical analysis. The three methods show strong improvement of discrimination as compared with a completely uninformative method. The two distance functions perform better than the multivariate method, in terms of discrimination and rates of misleading evidence.
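Two of the performance summaries mentioned here, rates of misleading evidence and an information-theoretical analysis, are standard in LR-method validation. The sketch below illustrates the type of computation, using the log-likelihood-ratio cost (Cllr) as one common information-theoretical choice; the paper's exact analysis may differ:

```python
import math

# Sketch of two standard performance summaries for a set of LRs with known
# ground truth: rates of misleading evidence and the log-likelihood-ratio
# cost (Cllr). Illustrative only; not the paper's exact analysis.

def rates_of_misleading_evidence(lrs_same_source, lrs_different_source):
    """Fraction of LRs pointing the wrong way under each ground truth."""
    rme_same = sum(lr < 1 for lr in lrs_same_source) / len(lrs_same_source)
    rme_diff = sum(lr > 1 for lr in lrs_different_source) / len(lrs_different_source)
    return rme_same, rme_diff

def cllr(lrs_same_source, lrs_different_source):
    """Cllr: near 0 for strong, well-calibrated LRs; 1 if every LR is 1."""
    c_same = sum(math.log2(1 + 1 / lr) for lr in lrs_same_source) / len(lrs_same_source)
    c_diff = sum(math.log2(1 + lr) for lr in lrs_different_source) / len(lrs_different_source)
    return 0.5 * (c_same + c_diff)
```

A completely uninformative method (all LRs equal to 1) scores Cllr = 1, which is the baseline against which the paper's "strong improvement of discrimination" can be read.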

Berger, C.E.H.; Stoel, R. Letter to the Editor (Response to Champod). Sci. Justice 2014, 54, 510-511.

Liwicki, M.; Malik, M.I.; Berger, C.E.H. Towards a shared conceptualization for automatic signature verification. In: Pirlo, G.; Impedovo, D.; Fairhurst, M. (Eds.), Advances in Digital Handwritten Signature Processing, 2014, 65-80. Chapter description


This chapter is an effort towards developing a shared conceptualization of automatic signature verification between the Pattern Recognition (PR) and Forensic Handwriting Examiner (FHE) communities. This is needed because FHEs require state-of-the-art PR systems that can be incorporated in forensic casework, but so far most of these systems are not directly applicable in such environments, owing to various differences, from terminology to evaluation, in how the two communities approach the signature verification problem. The chapter therefore addresses three major areas where the two communities differ and suggests possible solutions. First, it highlights how signature verification is approached differently in the two communities and why this gap is increasing; various factors that widen the gap are discussed with reference to recent signature verification studies, and probable solutions are suggested. Second, it discusses state-of-the-art evaluation and its problems as seen by FHEs, presenting the real evaluation issues FHEs face when trying to incorporate automatic signature verification systems in their routine casework. Third, it reports a standardized evaluation scheme capable of fulfilling the requirements of both PR researchers and FHEs and provides a practical exemplar of its usage.

Stoel, R.; Berger, C.E.H.; Kerkhoff, W.; Mattijssen, E.; Dror, I. Minimizing contextual bias in forensic casework. In: Hickman, M. (ed) Forensic Science and the Administration of Justice, 2013, SAGE Publications, 67-86. Book description


One of the central themes of the book is that social science research can inform us about the utility of forensic science with respect to both the criminal investigative and adjudicative processes. A second theme of the book concerns questions about the scientific underpinnings of forensic services, including the accuracy and scientific methods of certain forensic disciplines as well as the influence of externalities. A final theme explores the role of the crime laboratory in the American justice system and how it is evolving, in concert with technological advancements as well as changing demands and competing pressures for laboratory resources.


Malik, M.I.; Liwicki, M.; Alewijnse, L.; Blumenstein, M.; Berger, C.E.H.; Stoel, R.; Found, B. (Eds.), Proceedings of the 2nd ICDAR International Workshop on Automated Forensic Handwriting Analysis, Vol. 1022, CEUR-WS, 2013.

Berger, C.E.H. Objective ink color comparison through image processing and machine learning. Sci. Justice 2013, 53, 55-59. Abstract


Making changes or additions to written entries in a document can be profitable and illegal at the same time. A simple univariate approach is first used in this paper to quantify the evidential value in color measurements for inks on a document coming from a different or the same source. Graphic, qualitative discrimination is then obtained independently by applying color deconvolution image processing to document images, with parameters optionally optimized by support vector machines (SVM), a machine learning method. Discrimination based on qualitative results from image processing is finally compared to the quantitative results of the statistical approach. As color differences increase, optimized color deconvolution achieves qualitative discrimination when the statistical approach indicates evidence for the different source hypothesis.


Berger, C.E.H.; Ramos, D. Objective paper structure comparison: Assessing comparison algorithms. Forensic Sci. Int. 2012, 222, 360-367. Abstract


More than just being a substrate, paper can also provide evidence for the provenance of documents. An earlier paper described a method to compare paper structure, based on the Fourier power spectra of light transmission images. Good results were obtained by using the 2D correlation of images derived from the power spectra as a similarity score, but the method was very computationally intensive. Different comparison algorithms are evaluated in this paper, using information theoretical criteria. An angular invariant algorithm turned out to be as effective as the original one but 4 orders of magnitude faster, making the use of much larger databases possible.
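One way to obtain an angularly invariant comparison of power spectra is sketched below. The details are hypothetical (the paper's algorithm is not reproduced here): each 2D spectrum is reduced to its radial average, and the resulting 1D profiles are correlated:

```python
import numpy as np

# Hypothetical illustration of an angularly invariant spectrum comparison
# (not the authors' algorithm): reduce each 2D Fourier power spectrum to
# its radial average, then correlate the resulting 1D profiles.

def radial_profile(image):
    """Rotation-invariant 1D profile of the 2D Fourier power spectrum."""
    image = image - image.mean()  # suppress the dominant DC term
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = power.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2).astype(int)
    # Mean power in each integer radius bin.
    return np.bincount(r.ravel(), weights=power.ravel()) / np.bincount(r.ravel())

def similarity(img_a, img_b):
    """Correlation of the radial profiles as a similarity score."""
    pa, pb = radial_profile(img_a), radial_profile(img_b)
    n = min(len(pa), len(pb))
    return np.corrcoef(pa[:n], pb[:n])[0, 1]
```

Because only the radial average is compared, rotating one of the images leaves the score essentially unchanged, and the 2D correlation over all relative angles is avoided, which is where the large speed-up of an angular invariant comparison comes from.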

Sjerps, M.; Berger, C.E.H. How clear is transparent? Reporting expert reasoning in legal cases. Law, Probability and Risk 2012, 11, 317-329. Abstract


Experts providing evidence in legal cases are universally recommended to be transparent, particularly in their reasoning, so that legal practitioners can critically check whether the conclusions are adequately supported by the results. However, when exploring the practical meaning of this recommendation it becomes clear that people have different things in mind. The UK appeal court case R v T painfully exposes the different views. In this article we argue that there can be a trade-off between clarity and transparency, and that in some cases it is impossible for the legal practitioner to be able to follow the expert's reasoning in full detail because of the level of complexity. All that can be expected in these cases is that the legal practitioner is able to understand the reasoning up to a certain level. We propose that experts should only report the main arguments, but must make this clear and provide further details on request. Reporting guidelines should address the reasoning in more detail. Legal practitioners and scientists should not be telling each other what to do in the setting of a legal case, but in other settings more discussion will be beneficial to both. We see the likelihood ratio framework and Bayesian networks as tools to promote transparency and logic. Finally, we argue that transparency requires making clear whether a conclusion is a consensus and reporting diverging opinions on request.

Berger, C.E.H.; Sjerps, M. Reaction to Hamer and Thompson in LPR. Law, Probability and Risk 2012, 11, 373-375. Abstract


The Hamer contribution reveals a lot of the common misperceptions surrounding the issues in R v T. While the paper risks adding to the confusion of the uninformed reader, we will use it to list and address such misperceptions in this reaction. We acknowledge that the author will in some cases have described misconceptions held by others rather than his own, although this is not always clear.

Berger, C.E.H.; Buckleton, J.S.; Champod, C.; Evett, I.W.; Jackson, G. Response to Jamieson regarding “More on the Bayesian Approach and the LR”. Sci. Justice 2012, 52, 203.

Robertson, B.; Vignaux, G.A.; Berger, C.E.H. Discussion on the paper by Neumann, Evett and Skerrett: Quantifying the weight of evidence from a forensic fingerprint comparison. J. R. Statist. Soc. A 2012, 175, 407-408.

Berger, C.E.H.; Robertson, B.; Vignaux, G.A. Interpreting scientific evidence, Chapter 28 in Expert Evidence, eds. Freckelton & Selby. Abstract


This Expert Evidence chapter deals with the central matters relating to the interpretation of forensic scientific evidence. Expert scientific evidence usually involves the forensic scientist making an observation on some aspect of the case and, based on past experience, reporting inferences to the court. The task of the forensic scientist and of the lawyers is to see precisely what inferences can and cannot legitimately be drawn from such an observation. There is a simple and logical solution to these questions that deals with many of the difficulties courts have perceived with expert evidence.


Berger, C.E.H.; Buckleton, J.S.; Champod, C.; Evett, I.W.; Jackson, G. Response to Faigman et al. Sci. Justice 2011, 51, 215.

Liwicki, M.; Blumenstein, M.; Found, B.; van den Heuvel, C.E.; Berger, C.E.H.; Stoel, R. (Eds.), Proceedings of the 1st International Workshop on Automated Forensic Handwriting Analysis, Vol. 768, CEUR-WS, 2011.

Liwicki, M.; Malik, M.I.; van den Heuvel, C.E.; Chen, X.; Berger, C.E.H.; Stoel, R.; Blumenstein, M.; Found, B. Signature Verification Competition for Online and Offline Skilled Forgeries (SigComp2011), in: 11th Int. Conf. on Document Analysis and Recognition 2011, 1480-1484. Abstract


The Netherlands Forensic Institute and the Institute for Forensic Science in Shanghai are in search of a signature verification system that can be implemented in forensic casework and research to objectify results. We want to bridge the gap between recent technological developments and forensic casework. In collaboration with the German Research Center for Artificial Intelligence we have organized a signature verification competition on datasets with two scripts (Dutch and Chinese) in which we asked to compare questioned signatures against a set of reference signatures. We have received 12 systems from 5 institutes and performed experiments on online and offline Dutch and Chinese signatures. For evaluation, we applied methods used by Forensic Handwriting Examiners (FHEs) to assess the value of the evidence, i.e., we took the likelihood ratios more into account than in previous competitions. The data set was quite challenging and the results are very interesting.

Hermsen, R.; Stoel, R.D.; Berger, C.E.H. Ingezonden brief, Expertise en Recht 2011, 4, 160.

Berger, C.E.H.; Buckleton, J.S.; Champod, C.; Evett, I.W.; Jackson, G. Evidence evaluation: a response to the Appeal Court judgment in R v T. Sci. Justice 2011, 51, 43-49. Abstract


This is a discussion of a number of issues that arise from the recent judgment in R v T. Although the judgment was concerned with footwear evidence, its more general remarks have implications for all disciplines within forensic science. Our concern is that the judgment will be interpreted as being in opposition to the principles of logical interpretation of evidence. We re-iterate those principles and then discuss several extracts from the judgment that may be potentially harmful to the future of forensic science. A position statement with regard to evidence evaluation, signed by many forensic scientists, statisticians and lawyers, has appeared in this journal, and the present paper expands on the points made in that statement.

Berger, C.E.H.; Buckleton, J.S.; Champod, C.; Evett, I.W.; Jackson, G. Expressing evaluative opinions: A position statement. Sci. Justice 2011, 51, 1-2. Abstract

The judgment of the Court of Appeal in R v T raises several issues relating to the evaluation of scientific evidence that, we believe, require a response. We, the undersigned, oppose any response to the judgment that would result in a movement away from the use of logical methods for evidence evaluation. A paper in this issue of the Journal re-iterates logical principles of evidence interpretation that are accepted by a broad range of those who have an interest in forensic reasoning.

Robertson, B.; Vignaux, G.A.; Berger, C.E.H. Extending the confusion about Bayes. Modern Law Review 2011, 74, 444-455. Abstract


In R v T [2010] EWCA Crim 2439, [2011] 1 Cr App Rep 85, the Court of Appeal indicated that ‘mathematical formulae’, such as likelihood ratios, should not be used by forensic scientists to analyse data where firm statistical evidence did not exist. Unfortunately, when considering the forensic scientist's evidence, the judgment consistently commits a basic logical error, the ‘transposition of the conditional’ which indicates that the Bayesian argument has not been understood and extends the confusion surrounding it. The judgment also fails to distinguish between the validity of the relationships in a formula and the precision of the data. We explain why the Bayesian method is the correct logical method for analysing forensic scientific evidence, how it works and why ‘mathematical formulae’ can be useful even where firm statistical data is lacking.
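The logical error the judgment commits, the transposition of the conditional, can be made concrete with a small numerical sketch (all numbers hypothetical):

```python
# Numerical illustration (hypothetical numbers) of the 'transposition of
# the conditional': confusing P(E | not H) with P(not H | E).
# Suppose a mark's characteristics occur in 1% of a relevant population,
# and the prior puts the accused among 10,000 equally likely alternatives.

p_e_given_not_h = 0.01            # P(evidence | accused not the source)
lr = 1.0 / p_e_given_not_h        # assuming P(evidence | source) = 1
prior_odds = 1 / 10000
posterior_odds = prior_odds * lr
posterior = posterior_odds / (1 + posterior_odds)

# P(H | E) comes out at about 1%, not the 99% that transposing the
# conditional (reading P(E | not H) = 1% as P(not H | E) = 1%) suggests.
print(round(posterior, 4))  # ≈ 0.0099
```

The match probability P(E | not H) and the posterior probability P(not H | E) differ by the prior odds, which is exactly what Bayes' rule keeps track of.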

Sjerps, M.; Berger, C.E.H. Het Bayesiaanse model biedt een helder zicht op een complexe werkelijkheid; The Bayesian approach offers a clear view of a complex reality. NFI white paper, May 2011. Abstract


The evaluation of criminal evidence is of great interest to both the general public and scientists. Those scientists come from different disciplines, such as legal psychology, philosophy, statistics and forensic science. As befits true scientists, they are very critical. The evaluation of forensic evidence by the Netherlands Forensic Institute (NFI) is therefore closely followed by fellow scientists.

De evaluatie van strafrechtelijk bewijs staat sterk in de belangstelling van zowel het algemene publiek als van wetenschappers. Die wetenschappers zijn afkomstig uit verschillende vakgebieden, zoals de rechtspsychologie, de filosofie, de statistiek, en de forensische wetenschap. Zoals het ware wetenschappers betaamt, zijn zij daarbij zeer kritisch. Zo wordt ook de evaluatie van forensisch bewijs door het Nederlands Forensisch Instituut (NFI) nauwlettend door collega-wetenschappers gevolgd.


Klein, M.E.; Aalderink, B.J.; Berger, C.E.H.; Herlaar, K.; de Koeijer, J.A. Quantitative hyperspectral imaging technique for measuring material degradation effects and analyzing TLC plate traces. J. of the ASQDE 2010, 13, 71-81. Abstract


In forensic document analysis, multi-spectral reflectance and luminescence imaging techniques are routinely used for distinguishing inks and for enhancing the legibility of faint or invisible writing. The transition from conventional, qualitative spectral imaging to quantitative hyperspectral imaging (QHSI) made possible by the SENTINEL instrument facilitates and enhances the applicability of the technique to less common tasks. Several simple demonstration experiments were carried out to illustrate how the QHSI technique can be used in two application areas: the study of degradation effects in materials and the analysis of thin layer chromatography (TLC) plates. As examples for the first application area, the changes in the reflectance and luminescence characteristics of paper and writing induced by exposure to sunlight and strong UV light were measured with the QHSI instrument. As an example for the second application area, the SENTINEL instrument was used to measure a TLC plate with ink samples. Based on the large number of calibrated reflectance and luminescence images, one can generate false-color images that facilitate the visual comparison of the positions and intensities of bands. A more detailed analysis is possible by extracting numeric cross-section data along the different sample traces.

Berger, C.E.H. Criminalistiek is terugredeneren; Forensic science is reasoning backwards. Nederlands Juristenblad 2010, 85, 784-789. Abstract


Science plays an increasing role in criminal law, and both are rightly held up to higher standards. The awareness that scientists and lawyers will need to find each other more often to reach a higher level is also increasing. Logically correct reasoning and concluding are indispensable at that higher level, especially when uncertainty is involved. This paper describes how logically correct conclusions are given in forensic reports, and how the reader can deal with this.

De wetenschap speelt een toenemende rol in het strafrecht, en terecht worden aan beide steeds hogere eisen gesteld. Het besef dat wetenschappers en juristen meer toenadering moeten zoeken om een hoger niveau te kunnen bereiken, neemt ook toe. Logisch correct redeneren en concluderen is onontbeerlijk op dat hogere niveau, zeker wanneer onzekerheid een rol speelt. Dit artikel beschrijft hoe er in forensische rapportages logisch correct geconcludeerd wordt, en hoe de lezer hiermee om kan gaan.

Berger, C.E.H.; Meuwly, D. Logically correct concluding and rational reasoning in evidence evaluation. Sci. Justice 2010, 50, 33. Abstract



This presentation deals with the implementation of logically correct, balanced, robust and transparent forensic reporting. The Netherlands Forensic Institute (NFI) produces about 35,000 reports per year in 43 fields of expertise. About 20,000 of those reports are complete statements including a forensic interpretation and conclusion. The improvement of the quality of the reporting is an ongoing activity of the NFI, but in the last 3 years the authors' efforts towards transparency were focused on rendering the conclusions of the forensic reports more uniform, transparent, balanced, and logically correct. For the following years we envisage improving the transparency of forensic reasoning, using Bayesian Networks (BNs) for explicit and rational reasoning. We will discuss the implications for reporting, casework and R&D, as well as internal and external education aspects. A very short introduction to Bayesian Networks will be given, and the improvement efforts will be related to the recommendations for improvement of forensic science in the United States by the National Academy of Sciences.

Logically correct conclusions

At the NFI, the first step towards transparent forensic conclusions consists of reporting logically correct forensic conclusions. This requires defining a correct set of hypotheses to be considered, and estimating the ratio of the probabilities of the analytical findings when one or the other hypothesis is taken to be true (likelihood ratios). The estimation of those probabilities should be quantitative where possible (objective estimation); when this is not feasible, verbal scales can be used (subjective estimation). Gathering more empirical data to support the estimations requires a (long-term) R&D effort. Finally, the verbal scales will need to be calibrated to quantitative likelihood ratios.
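The kind of mapping described here, from a quantitative likelihood ratio to a verbal scale, can be sketched as follows (the bracket boundaries and phrases below are illustrative assumptions, not the NFI's actual scale):

```python
# Map a numerical likelihood ratio (LR >= 1 assumed) to a verbal conclusion.
# The brackets and wording are hypothetical, for illustration only.
VERBAL_SCALE = [
    (2,      "approximately equally probable"),
    (10,     "slightly more probable"),
    (100,    "more probable"),
    (10_000, "much more probable"),
]

def verbal_equivalent(lr: float) -> str:
    """Return a verbal expression for how much more probable the findings
    are under one hypothesis than under the other."""
    for upper_bound, phrase in VERBAL_SCALE:
        if lr < upper_bound:
            return phrase
    return "far more probable"

print(verbal_equivalent(50))   # "more probable"
```

Calibrating such a scale means checking empirically that reported phrases correspond to the stated numerical LR brackets, which is the long-term effort the abstract refers to.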

Transparent rational reasoning

The next phase will include promoting the use of Bayesian Networks (BNs). BNs can make the structure of the forensic reasoning process and the conditional probabilities involved explicit. We foresee the use of BNs for the interpretation of the evidence at the activity level and for the interpretation of combined evidence, but also for case pre-assessment and for assessing in which part of the forensic process R&D is most needed. The practical implementation of such a tool will clearly necessitate an effort in terms of education, for developers of BNs as well as for BN users in casework and readers of reports.

Berger, C.E.H. Het juiste gewicht in de schaal; The right weight in the scales of justice. Ars Aequi 2010, 499-501. Abstract


‘Measuring is knowing’, as the common expression goes, and measurement can indeed provide important information. But to draw conclusions from measurement results you will almost always need more than that. Inferences are based not only on those results but also on logic and empirical data. Together, these should indicate to what extent the results are evidence for the truth of relevant hypotheses in a case. When it comes to truth finding, the scientific side of evidence is essential for putting the right weight in the scales.

‘Meten is weten’, luidt de gevleugelde kreet, en meten kan inderdaad belangrijke informatie opleveren. Alleen zal voor het trekken van conclusies uit meetresultaten vrijwel altijd meer nodig zijn. Gevolgtrekkingen zijn niet alleen op die meetresultaten gebaseerd, maar ook op logica en populatiegegevens. Daarmee kan worden aangegeven in welke mate de resultaten bewijs vormen voor relevante hypothesen in een zaak. Als het gaat om waarheidsvinding is de wetenschappelijke kant van bewijs essentieel om het juiste gewicht in de schaal te leggen.

Berger, C.E.H.; Aben, D. Bewijs en overtuiging: Rationeel redeneren sinds Aristoteles; Rational reasoning since Aristotle. Expertise en Recht 2010, 2, 52-56. Abstract


Judges sometimes say they do their work based on Aristotle, experience and intuition. Gut feeling, so to speak. Science would thus have little to do with this, and have little to offer to the judge that reaches his verdicts by using his experience. Science however, has evolved since the times of Aristotle (384-322 BC). This is no different for forensic science. Can the judge do without?

Rechters veronderstellen soms hun werk te doen op basis van Aristoteles, ervaring en intuïtie. ‘Fingerspitzengefühl’ dus. Wetenschap zou hiermee weinig te maken hebben en ook weinig kunnen bieden aan de rechter die met behulp van ervaring tot zijn uitspraken komt. De wetenschap heeft echter niet stilgestaan sinds de tijden van Aristoteles (384-322 v. Chr.). Dat is voor de forensische wetenschap niet anders. Kan de strafrechter nog zonder?

Berger, C.E.H.; Aben, D. Bewijs en overtuiging: Redeneren in de rechtszaal. Expertise en Recht 2010, 3, 86-90. Abstract


In the second part of this triptych we turn to the application of the Bayesian reasoning scheme in criminal adjudication. Using several examples, we will attempt to demonstrate the usefulness of this reasoning scheme in the analysis of evidence, without (much) use of numbers. The core of our argument is that the Bayesian reasoning scheme fosters insight into the strength and relevance of the evidence, or the lack thereof, even without quantifying the evidential value and the odds.

In het tweede deel van dit drieluik komen wij te spreken over de toepassing van het Bayesiaanse redeneerschema in de strafrechtspraak. Aan de hand van enkele voorbeelden zullen wij trachten te demonstreren wat het nut is van de toepassing van dit redeneerschema bij de analyse van het bewijsmateriaal, zonder (veel) gebruik te maken van cijfers. De kern van ons betoog is namelijk dat het Bayesiaanse redeneerschema inzicht kweekt in de kracht en relevantie van het bewijsmateriaal, dan wel het gebrek daaraan, ook zonder de bewijskracht en de ‘odds’ te kwantificeren.

Berger, C.E.H.; Aben, D. Bewijs en overtuiging: Een helder zicht op valkuilen. Expertise en Recht 2010, 5/6, 159-165. Abstract


With this third part we conclude the triptych ‘Bewijs en overtuiging’. If you have not (yet) read the first two parts, you can nevertheless read this part as a stand-alone article, because we summarize the Bayesian reasoning scheme for you. Given the importance of prior odds for the correct use of the reasoning scheme, we devote some attention to them. We then discuss a number of common errors in reasoning with evidence, and the risk of missing the interdependence between items of evidence. In closing, we express the expectation that the reasoning scheme will make a valuable contribution to improving reasoning with uncertain information.

Met dit derde deel sluiten we het drieluik ‘Bewijs en overtuiging’ af. Als u de eerste twee delen (nog) niet gelezen hebt, kunt u dit deel niettemin als een zelfstandig artikel lezen, doordat we het Bayesiaans redeneerschema voor u samenvatten. Gegeven het belang van a-priorikansverhoudingen voor het correct gebruik van het redeneerschema besteden we hieraan enige aandacht. Daarna zullen we ingaan op een aantal veel voorkomende fouten in het redeneren met bewijs, en het risico van het missen van de samenhang tussen bewijsmiddelen. Afsluitend spreken we de verwachting uit dat het redeneerschema een waardevolle bijdrage zal leveren aan een verbetering van het redeneren met onzekere informatie.

Berger, C.E.H.; Sjerps, M.J. Reactie op 'Begrijpt de rechter wat ik bedoel?' Ars Aequi 2010, 816. Abstract


With this reaction we want to point out some inaccuracies that crept into the article ‘Begrijpt de rechter wat ik bedoel?’ by professor Broeders in the July/August issue of Ars Aequi this year, and to add some critical remarks.

Met deze reactie willen we wijzen op enkele onjuistheden die in het artikel ‘Begrijpt de rechter wat ik bedoel?’ door professor Broeders in Ars Aequi van juli/augustus dit jaar zijn geslopen en enkele kanttekeningen plaatsen.

Stoel, R.; Berger, C.E.H.; van den Heuvel, E.; Fagel, W. De wankele kritiek op het forensisch handschriftonderzoek; The shaky criticism of forensic handwriting analysis. Nederlands Juristenblad 2010, 2537-2541. Abstract


Handwriting analysis is a complicated form of comparative forensic examination with an important subjective component. The underlying basic principle is not that a handwriting examiner would be able to link handwriting to a unique source. The examination is aimed at an assessment of the evidential value.

Handschriftonderzoek is een gecompliceerde vorm van vergelijkend forensisch onderzoek met een belangrijke subjectieve component. Het onderliggende basisprincipe is niet dat een handschriftonderzoeker in staat zou zijn om een handschrift aan een unieke bron te koppelen. Het onderzoek is erop gericht tot een inschatting van de bewijskracht te komen.


Berger, C.E.H. Objective paper structure comparison through processing of transmitted light images. Forensic Sci. Int. 2009, 192, 1-6. Abstract


A method for the comparison of paper structure using light transmission images and frequency analysis was developed. The resolution of the light transmission images and the algorithm for the feature extraction were greatly improved to enhance the visibility of peaks in the 2D power spectrum that results from frequency analysis. A comparison method based on correlation measures how well the spectra match as a function of the orientation of the paper, yielding an objective and quantitative measure of similarity between 0 and 1. A technical validation was carried out with 25 different papers showing the potential of this method with common copy papers. Finally, the method was applied in a case.
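The correlation-based similarity score described here (an objective measure between 0 and 1) can be sketched in pure Python; the spectra below are invented stand-ins for the 2D power spectra the paper derives from transmitted-light images:

```python
# Normalized (Pearson-style) correlation of two flattened power spectra,
# clipped to [0, 1]. A value near 1 means the spectra - and thus the paper
# structures - match well.
import math

def similarity(spectrum_a, spectrum_b):
    """Correlation of two equal-length spectra, clipped to [0, 1]."""
    n = len(spectrum_a)
    mean_a = sum(spectrum_a) / n
    mean_b = sum(spectrum_b) / n
    cov = sum((a - mean_a) * (b - mean_b)
              for a, b in zip(spectrum_a, spectrum_b))
    var_a = sum((a - mean_a) ** 2 for a in spectrum_a)
    var_b = sum((b - mean_b) ** 2 for b in spectrum_b)
    r = cov / math.sqrt(var_a * var_b)
    return max(0.0, r)

same = [1.0, 4.0, 2.0, 0.5]          # hypothetical spectrum of one paper
other = [0.5, 1.0, 3.0, 4.0]         # hypothetical spectrum of another paper
print(similarity(same, same))        # 1.0
print(similarity(same, other))       # lower score for a different paper
```

In the paper the correlation is additionally evaluated as a function of the paper's orientation, so that the maximum over relative rotations serves as the final similarity score.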

Berger, C.E.H. Inference of identity of source using univariate and bivariate methods. Sci. Justice 2009, 49, 265-271. Abstract


In this study we explore the inference of identity of source using a two-dimensional feature vector. As an example, we study the use of the Bayesian framework for the estimation of the value of evidence of color measurements for identity of source of blue ballpoint pen inks. Univariate as well as bivariate analyses are carried out for color data that was acquired with a flatbed scanner. While this might not be the best method to discriminate inks, we will use it as an example to estimate what the value of the evidence is, however low or high it may be. It is hoped that this exercise is instructional, as a similar approach can readily be applied in other situations.
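For the univariate case, the value of evidence for common source can be sketched as a ratio of two normal densities. The means and standard deviations below are invented for illustration; the paper estimates such distributions from measured ink color data:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def likelihood_ratio(measurement, mu_source, sigma_source, mu_pop, sigma_pop):
    """LR = p(measurement | same source as the known ink) /
            p(measurement | random ink from the population).
    All parameter values used here are hypothetical."""
    return (normal_pdf(measurement, mu_source, sigma_source)
            / normal_pdf(measurement, mu_pop, sigma_pop))

# A color value close to the known source, within a widely spread population,
# yields an LR above 1: evidence supporting the common-source hypothesis.
lr = likelihood_ratio(0.52, mu_source=0.50, sigma_source=0.02,
                      mu_pop=0.50, sigma_pop=0.20)
print(lr)
```

The bivariate analysis in the paper follows the same logic with two-dimensional densities, where correlation between the color coordinates is taken into account.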

Berger, C.E.H.; Veenman, C.J. Color Deconvolution and Support Vector Machines. In Lecture Notes in Computer Science; Geradts, Z.J.M.H.; Franke, K.Y.; Veenman, C.J., Eds.; Springer-Verlag: Berlin, 2009, 5718; 174-180. Abstract


Methods for machine learning (support vector machines) and image processing (color deconvolution) are combined in this paper for the purpose of separating colors in images of documents. After determining the background color, samples from the image that are representative of the colors to be separated are mapped to a feature space. Given the clusters of samples of either color the support vector machine (SVM) method is used to find an optimal separating line between the clusters in feature space. Deconvolution image processing parameters are determined from the separating line. A number of examples of applications in forensic casework are presented.
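The geometry of the approach can be sketched without a full SVM: a separating line between two color clusters in feature space defines a direction along which pixels can be scored or suppressed. The sketch below substitutes a simple mean-difference discriminant for the max-margin SVM used in the paper, and all sample values are invented:

```python
# Two clusters of hypothetical (red, green) feature-space samples,
# representative of the two ink colors to be separated.
ink_a = [(0.80, 0.30), (0.78, 0.33), (0.82, 0.29)]
ink_b = [(0.35, 0.70), (0.38, 0.68), (0.33, 0.72)]

def mean(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

# Normal to the separating line: the normalized difference of the cluster
# means. An SVM would instead choose the maximum-margin normal.
ma, mb = mean(ink_a), mean(ink_b)
w = (ma[0] - mb[0], ma[1] - mb[1])
norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
w = (w[0] / norm, w[1] / norm)

def score(pixel):
    """Signed distance along w from the midpoint between the clusters;
    positive scores classify a pixel as ink A, negative as ink B."""
    mid = ((ma[0] + mb[0]) / 2, (ma[1] + mb[1]) / 2)
    return (pixel[0] - mid[0]) * w[0] + (pixel[1] - mid[1]) * w[1]

print(score((0.79, 0.31)) > 0)   # True: this pixel belongs to ink A
print(score((0.36, 0.69)) > 0)   # False: this pixel belongs to ink B
```

In the paper, the parameters of the color deconvolution image processing step are then derived from the separating line found by the SVM.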


Berger, C.E.H.; de Koeijer, J.A.; Glas, W.; Madhuizen, H. Color Separation in Forensic Image Processing. J. Forensic Sci. 2006, 51, 100-102. Abstract


In forensic image processing, it is often important to be able to separate a feature from an interfering background or foreground, or to demonstrate colors within an image to be different from each other. In this study, a color deconvolution algorithm that could accomplish this task is described, and it is applied to color separation problems in document and fingerprint examination. Subtle color differences (sometimes invisible to the naked eye) are found to be sufficient, which is demonstrated successfully for several cases where color differences were shown to exist, or where colors were removed from the foreground or background. The software is available for free in the form of an Adobe® Photoshop®-compatible plug-in.

de Koeijer, J.A.; Berger, C.E.H.; Glas, W.; Madhuizen, H. Gelatine Lifting, a Novel Technique for the Examination of Indented Writing. J. Forensic Sci. 2006, 51, 908-914. Abstract


The limitations of the examination of indented writing impressions using electrostatic detection are often paper-related. Paper types such as glossy paper, paper of high basis weight, and lithography- or gravure-printed papers often give rise to problems, resulting in a decrease in sensitivity or a lack of detection altogether. In this paper, a novel technique for the examination of indented writing is presented, which is in a sense complementary to the technique of electrostatic detection, as it is especially suitable for glossy-coated and printed paper types and can in some instances also deal with paper types of higher basis weight. Indented writing grooves will normally contain more particles than the surrounding non-indented areas due to damage of the surface layer resulting in a build-up of filler powder. The method presented uses black gelatine lifter slabs to lift the paper dust image off the surface of the paper. This image can quite easily be photographed using near-to-coaxial lighting.

The gelatine lifting method outperforms oblique lighting for the detection of indented writing and is almost as sensitive as electrostatic detection if compared on the types of paper where both perform well. The main advantage of this new technique is, however, that it is especially suitable for those types of paper where electrostatic detection fails and is therefore a welcome addition to the range of methods available to a forensic document examiner for the examination of indented writing.

Bergeron, V.; Berger, C.E.H.; Betterton, M.D. Controlled Irradiative Formation of Penitentes. Phys. Rev. Lett. 2006, 96, 098502-1 - 098502-4. See also: "Spiked Ice". Abstract


Spike-shaped structures are produced by light-driven ablation in very different contexts. Penitentes 1-4 m high are common on Andean glaciers, where their formation changes glacier dynamics and hydrology. Laser ablation can produce cones 10-100 μm high with a variety of proposed applications in materials science. We report the first laboratory generation of centimeter-scale snow and ice penitentes. Systematically varying conditions allows identification of the parameters controlling the formation of ablation structures. We demonstrate that penitente initiation and coarsening require cold temperatures, so that ablation leads to sublimation. Once penitentes have formed, further growth in height can occur by melting. The penitentes initially appear as small structures (3 mm high) and grow by coarsening to 1-5 cm high. Our results are an important step towards understanding ablation morphologies.


Berger, C.E.H.; de Koeijer, J.A.; Glas, W.; Madhuizen, H. Linking inkjet printing to a common digital source document. J. of the ASQDE 2005, 8, 91-94. Abstract


Minimal differences in a digital source document will drastically change the error diffusion dot pattern of an inkjet print. This study explains and demonstrates this effect and shows how this particular property of the error diffusion screening method can be used to link inkjet-printed documents to a common digital source. The results of the study were applied in a case.
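The sensitivity of error diffusion to tiny input changes can be illustrated with a minimal Floyd-Steinberg sketch (a common error-diffusion scheme, used here for illustration; the paper does not specify which variant a given printer driver uses):

```python
def floyd_steinberg(gray):
    """Halftone a grayscale image (rows of floats in [0, 1]) to 0/1 dots,
    diffusing the quantization error to neighbouring pixels."""
    img = [row[:] for row in gray]
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            out[y][x] = 1 if old >= 0.5 else 0
            err = old - out[y][x]
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return out

# Two 'source documents' differing only minimally in a single pixel value:
doc1 = [[0.49] * 8 for _ in range(8)]
doc2 = [[0.49] * 8 for _ in range(8)]
doc2[0][0] = 0.51
dots1, dots2 = floyd_steinberg(doc1), floyd_steinberg(doc2)
diff = sum(a != b for r1, r2 in zip(dots1, dots2) for a, b in zip(r1, r2))
print(f"{diff} of 64 dots changed")
```

Because the quantization error propagates from pixel to pixel, a minimal edit to the source can change the dot pattern far beyond the edited pixel, which is the property exploited for linking prints to a common digital source.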


Berger, C.E.H.; Desbat, B.; Kellay, H.; Turlet, J-M.; Blaudez, D. Water Confinement Effects in Black Soap Films. Langmuir 2003, 19, 1-5. Abstract


Water confined in the ultrathin interstitial core of black soap films has been studied by infrared and Raman spectroscopies. Lowering the water core thickness below 1 nm induces spectral modifications in the multicomponent O-H stretching band which can be associated with environment changes for water molecules. Differences observed between nonionic and ionic surfactants allow us to distinguish the sole effect of confinement from the combined effects of confinement and ionic strength. Both trends correlate with previous experimental observations done on different surfactant solutions and with results from numerical simulations.

Berger, C.E.H.; Bergeron, V.; Desbat, B.; Blaudez, D.; Kellay, H.; Turlet, J-M. Bilayer in a Liquid Self-Supported Film. Langmuir 2003, 19, 8615-8617. Abstract


The drainage of vertical soap films formed from nonionic surfactant solutions of pentaethylene glycol monododecyl ether (C12E5) was studied using Fourier transform infrared spectroscopy. At relatively low surfactant concentrations in the bulk solution (but above the critical micelle concentration), a transient surfactant bilayer was observed within these soap films. The expulsion of water and of a surfactant bilayer from the film was followed as a function of time. The bilayer found within the film displayed a much higher degree of molecular order than the two outer monolayers of the soap film. This study can serve as a model for similar studies on biomembranes.


Berger, C.E.H.; Greve, J. Differential SPR immunosensing. Sensor. Actuat. B-Chem. 2000, 63, 103-108. Abstract


In this work we describe and demonstrate a surface plasmon resonance (SPR) sensor with differential detection of the SPR angle. The angle of incidence is modulated by a simple piezo-electric actuator, and the reflectance signal is measured with a lock-in amplifier. When the conditions for SPR are fulfilled, the differential signal is zero. The shift of the resonance conditions can be measured as an increase of the differential signal, or using feedback on the angle of incidence. The sensor is demonstrated by monitoring an adsorption and an immunoreaction at the sensor surface with sub-mdeg resolution. The modulated incident beam can also be scanned past a number of sensing areas, making multichannel measurements possible, as will be demonstrated.


Berger, C.E.H.; Kooyman, R.P.H.; Greve, J. Surface plasmon propagation near an index step. Opt. Commun. 1999, 167, 183-189. Abstract


Propagation effects of surface plasmons on the surface plasmon microscopy (SPM) image of an area around the edge of a cover layer were studied as a function of the wavelength. A phenomenological model that describes these effects of surface plasmon propagation on the observed reflectance is presented. Theoretical and experimental results for wavelengths ranging from 560 to 660 nm, for a 50 nm silver layer with a 30 nm thick SiO2 pattern on top, were compared and found to agree quite well.


Berger, C.E.H.; Beumer, T.A.M.; Kooyman, R.P.H.; Greve, J. Surface plasmon resonance multisensing. Anal. Chem. 1998, 70, 703-706. Abstract


We have demonstrated the feasibility of surface plasmon resonance (SPR) multisensing by monitoring four separate immunoreactions simultaneously in real time using a multichannel SPR instrument. A plasmon-carrying gold layer, onto which a four-channel flow cell was pressed, was imaged at a fixed angle of incidence. First, the four channels were coated with antibodies; then the flow cell was turned by 90° such that the flow channels overlapped the areas coated in the first step. In that geometry, antigens were applied to the different antibodies on the surface. Thus, all antibody-antigen combinations can be measured in a two-dimensional array of sensor surfaces in real time. Our results correlate with the expected immunologic specificity. The emphasis is on presenting this method to obtain data on immunosystems, not so much on the assessment of biological activity.


PhD Thesis: Berger, C.E.H. SPR and AFM Experiments on Biological Monolayers. ISBN 90-3650864-9. Abstract


Surface plasmon resonance (SPR) is the phenomenon of resonantly excited collective electron oscillations at a metal surface. This can be achieved by using the evanescent field of light reflecting in a prism to excite SPR in a thin metal layer on top of that prism. If resonance conditions are fulfilled, a minimum of light is reflected. The conditions for the reflecting light for which resonance occurs depend very much on the presence of optical structures in the evanescent field. Therefore, SPR can be used to image lateral heterogeneities in those structures and monitor changes in time. A general introduction into reflectometric methods for the study of surfaces and thin layers is given in the first chapter. Fresnel’s theory can be used to describe these methods. In the second chapter SPR is used to detect immune reactions at surfaces. In this case the optical structure within the evanescent field changes in time through the formation of immune complexes. Two types of SPR sensors are described. The first one uses a differential detection technique, and is demonstrated with single and multi-channel measurements. The second one, operating with a fixed angle of incidence, is used to monitor a two-dimensional array of sensor surfaces. A model describing the propagation of surface plasmons near an index step is described in the third chapter. Using this model one can estimate the lateral resolution obtainable with surface plasmon microscopy (SPM), where the reflecting light is imaged with a microscope objective. The fourth chapter describes an SPM setup and a number of methods used to enhance the lateral resolution. An introduction into the Langmuir-Blodgett (LB) method for the preparation of monolayers is given in the fifth chapter. The SPM is used to image domains in phase-separated lipid monolayers. In the sixth chapter a different method is used for the imaging of phase-separated monolayers: adhesion atomic force microscopy (adhesion AFM) is used to measure the adhesive interaction between the layer and the very sharp AFM tip for every point in the resulting image. This image shows the lateral distribution within the layer of the chemical groups that are exposed to the tip.


Berger, C.E.H.; Werf, K.O. van der; Kooyman, R.P.H.; Grooth, B.G. de; Greve, J. Functional Group Imaging by Adhesion AFM Applied to Lipid Monolayers. Langmuir 1995, 11, 4188-4192. Abstract


Recently developed adhesion atomic force microscopy was used as a technique to map the spatial arrangement of chemical functional groups at a surface with a lateral resolution of 20 nm. The ratio of the adhesion forces for different functional groups can be compared with values determined from the known surface energies. This concept was demonstrated by mapping the adhesive interaction of domains in a phase-separated lipid monolayer with the AFM tip. The ratio of the adhesion forces for both phases corresponds with the theoretical number for the CH2 and CH3 groups.


Berger, C.E.H.; Kooyman, R.P.H.; Greve, J. Resolution in surface plasmon microscopy. Rev. Sci. Instrum. 1994, 65, 2829-2836. Abstract


In this article we demonstrate how to obtain the ultimate lateral resolution in surface plasmon microscopy (SPM) (diffraction limited by the objective). Surface plasmon decay lengths are determined theoretically and experimentally, for wavelengths ranging from 531 to 676 nm, and are in good agreement. Using these values we can determine for each particular situation which wavelength should be used to obtain an optimal lateral resolution, i.e., where the plasmon decay length does not limit the resolution anymore. However, there is a trade‐off between thickness resolution and lateral resolution in SPM. Because of the non‐optimal thickness resolution, we use several techniques to enhance the image acquisition and processing. Without these techniques the use of short wavelengths results in images where the contrast has vanished almost completely. In an example given, a 2.5 nm SiO2 layer on a gold layer is imaged with a lateral resolution of 2 μm, and local reflectance curves are measured to determine the layer thickness. The SPM image is compared with an atomic force microscopy image of the same object. We obtain a 3 μm resolution when thickness differences within a lipid monolayer are imaged and measured.
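The plasmon decay (propagation) length discussed here follows from the imaginary part of the surface plasmon wave vector. A sketch, using a textbook-style dielectric constant for silver near 633 nm (an assumed value, not one measured in the paper):

```python
import cmath
import math

def plasmon_propagation_length(wavelength_m, eps_metal, eps_dielectric=1.0):
    """1/e intensity decay length of a surface plasmon along the interface:
    L = 1 / (2 * Im(k_sp)), with k_sp = k0 * sqrt(em * ed / (em + ed))."""
    k0 = 2 * math.pi / wavelength_m
    k_sp = k0 * cmath.sqrt(eps_metal * eps_dielectric
                           / (eps_metal + eps_dielectric))
    return 1.0 / (2.0 * k_sp.imag)

# Assumed complex dielectric constant of silver near 633 nm.
L = plasmon_propagation_length(633e-9, complex(-16.0, 0.5))
print(f"decay length = {L * 1e6:.0f} um")   # tens of micrometres
```

Because the imaginary part of the metal's dielectric constant grows toward shorter wavelengths, the decay length shrinks, which is why shorter wavelengths improve the lateral resolution at the cost of contrast, as described in the abstract.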
