
Economics of Security and Privacy



Economics of Security and Privacy

Buttyán, Levente

Félegyházi, Márk


Economics of Security and Privacy

Written by Buttyán, Levente and Félegyházi, Márk. Publication date: 2015

Copyright © 2015 Buttyán Levente, Félegyházi Márk


Contents

Economics of Security and Privacy ... 1

1. 1 Introduction ... 1

1.1. First look - quick overview ... 1

1.2. Security ... 1

1.3. Our definition ... 1

1.4. More definitions ... 2

1.5. More on attacks ... 2

1.6. More on attacks ... 2

1.7. Examples of attacks ... 3

1.8. Communication security - a simple view ... 3

1.9. Communication security services ... 4

1.10. Communication security services ... 4

1.11. Placement of security services ... 4

1.12. Security concepts summary ... 5

1.13. Internet communication ... 5

1.14. Internet communication ... 6

1.15. When things go wrong ... 7

1.16. When things go wrong ... 9

1.17. The engineering solution ... 10

1.18. Information security simplified ... 11

1.19. Information security: the big picture ... 11

1.20. Information security: the big picture ... 12

1.21. Information security: the big picture ... 13

1.22. Information security: the big picture ... 13

1.23. Information security: the big picture ... 14

1.24. Information security: the big picture ... 16

1.25. Information security: the big picture ... 17

1.26. Information security: the big picture ... 18

1.27. Our journey ... 19

1.28. Reading for next time ... 20

1.29. Information security incentives ... 20

1.30. Information security incentives ... 21

1.31. Misaligned incentives ... 21

1.32. Misaligned incentives: DDoS ... 22

1.33. Misaligned incentives: DDoS ... 23

1.34. Misaligned incentives: DDoS ... 24

1.35. Possible solution ... 25

1.36. IT markets ... 26

1.37. Reminder: attacker's advantage ... 26

1.38. Asymmetric info: Lemon markets ... 26

1.39. Economics of information security ... 26

1.40. Reading for next time ... 27

2. 2 Introduction to microeconomics ... 27

2.1. Information security: the big picture ... 27

2.2. Information security ... 28

2.3. Why economics? ... 29

2.4. Why economics? ... 29

2.5. Game theory ... 30

2.6. Chapter outline ... 30

2.7. What is a game? ... 30

2.8. Game: formal definition ... 31

2.9. Classification of games ... 31

2.10. Chapter outline ... 31

2.11. Prisoner's Dilemma ... 32

2.12. Normal (strategic) form ... 32

2.13. Strict dominance ... 33


2.14. Iterative strict dominance ... 33

2.15. A look into the crystal ball ... 34

2.16. Prisoner's Dilemma with a boss ... 34

2.17. PD with a boss in normal form ... 34

2.18. Weak dominance ... 35

2.19. Iterative weak dominance ... 35

2.20. Nash equilibrium ... 35

2.21. Finding Nash equilibria ... 35

2.22. Nash equilibrium - formal definition ... 36

2.23. Finding NE in a complex game ... 36

2.24. The Chicken game ... 36

2.25. The Chicken game ... 37

2.26. Mixed strategy Nash equilibrium ... 37

2.27. Coordination: The Chicken game in practice ... 38

2.28. Head or tail game ... 38

2.29. Nash Theorem ... 38

2.30. Efficiency of Nash equilibria ... 39

2.31. Finding Pareto-optimality in a complex game ... 39

2.32. Pareto-optimality in the PD ... 39

2.33. How to study Nash equilibria ? ... 40

2.34. Chapter outline ... 40

2.35. Extensive-form games ... 40

2.36. Strategies in dynamic games ... 40

2.37. Backward induction ... 41

2.38. Subgame perfection ... 42

2.39. Chapter outline ... 42

2.40. Repeated games ... 42

2.41. Utilities: Objectives in the repeated game ... 42

2.42. Strategies in the repeated game ... 43

2.43. Experiment: Tournament by Axelrod, 1984 ... 43

2.44. N-player PD: Tragedy of the Commons ... 44

2.45. Discussion on game theory ... 44

2.46. Reading for next time ... 45

3. 3 IT risk management ... 45

3.1. Security is risk management ... 45

3.2. Security is risk management ... 45

3.3. More concepts ... 45

3.4. Risk management (simplified) ... 46

3.5. Goal of risk management ... 47

3.6. Risk management lifecycle ... 47

3.7. Risk management standards ... 47

3.8. Risk Management process (ENISA) ... 47

3.9. Risk Management process (Risk-IT) ... 48

3.10. Risk management phases ... 49

3.11. Decision-makers ... 50

3.12. Risk management planning and governance ... 50

3.13. Behavior towards risks ... 51

3.14. Key factors to success ... 51

3.15. Risk assessment ... 52

3.16. Risk assessment ... 52

3.17. Risk analysis - flowchart ... 53

3.18. Measuring risks: simplified ... 53

3.19. Perceived composite risk (PCR) ... 54

3.20. PCR: calculate weights ... 54

3.21. PCR example ... 54

3.22. ALE method's "failure" ... 55

3.23. Improved methods ... 55

3.24. Improved methods (cont'd) ... 56

3.25. Quantitative risk management ... 56

3.26. Risk treatment - options ... 57


3.27. Risk treatment - controls ... 57

3.28. Risk treatment - action plan ... 58

3.29. Risk monitoring and review ... 58

3.30. Risk communication ... 58

3.31. Reading for next time ... 58

4. 4 Security investments ... 59

4.1. Defense is team-work ... 59

4.2. Threat model ... 60

4.3. Interdependent security model ... 62

4.4. Interdependent security model ... 62

4.5. Interdependent security model ... 62

4.6. Internalizing externalities ... 63

4.7. Free riding ... 63

4.8. Game model ... 63

4.9. Results - Nash equilibria - total effort ... 64

4.10. Results - Nash equilibria - weakest link ... 64

4.11. Results - Nash equilibria - best shot ... 65

4.12. Results - Increasing N ... 66

4.13. Results - Liability ... 66

4.14. Security investment is not so simple ... 66

4.15. Security investment is not so simple ... 67

4.16. Results ... 67

4.17. Results ... 67

4.18. More about efficiency ... 68

4.19. Efficiency of selfish investments ... 68

4.20. Price of Anarchy ... 68

4.21. Example ... 69

4.22. Effective investment model ... 70

4.23. Bad traffic model ... 71

4.24. Bad traffic model ... 71

4.25. Improvement in technology ... 72

4.26. Reading for next time ... 72

5. 5 Cyber-insurance ... 73

5.1. Information security: the big picture ... 73

5.2. Spam filtering strategies ... 75

5.3. Model ... 76

5.4. ISP behavior ... 76

5.5. ISPs' role in botnet and spam mitigation ... 77

5.6. ISPs' responsibility ... 77

5.7. Data collection ... 78

5.8. Botnets as spam sources ... 78

5.9. Data collection methodology (1/2) ... 78

5.10. Data collection methodology (2/2) ... 78

5.11. ISPs as critical points ... 79

5.12. ISPs as critical points - data (1/2) ... 79

5.13. ISPs as critical points - data (2/2) ... 80

5.14. ISPs performance (1/3) ... 81

5.15. ISPs performance (2/3) ... 82

5.16. ISP performance (3/3) ... 82

5.17. Intermediate summary ... 83

5.18. Incentive issues for ISPs ... 83

5.19. Can governments help? ... 83

5.20. Government intervention ... 84

5.21. Proposed workflow ... 84

5.22. Key benefits ... 86

5.23. Issues and objections ... 86

5.24. Reading for next time ... 86

6. 6 Interdependent security ... 86

6.1. Information security investment trends ... 86

6.2. Information security investment trends ... 87


6.3. Risk management phases ... 87

6.4. Budget constraints ... 87

6.5. Modeling security investments ... 88

6.6. Modeling security investments ... 88

6.7. Modeling security investments (cont'd) ... 88

6.8. Analysis ... 89

6.9. Results S1 ... 89

6.10. Results S1 - invest in high-risk vulnerabilities ... 89

6.11. Results S2 ... 90

6.12. Results S2 - invest in mid-range vulnerabilities ... 90

6.13. Results summary ... 91

6.14. GandL extensions ... 91

6.15. GandL extensions ... 91

6.16. Timing investments ... 92

6.17. Timing investments ... 92

6.18. Iterated security investments ... 93

6.19. Security investment with uncertainty ... 93

6.20. Iterated Weakest Link (IWL) model ... 94

6.21. Iterated Weakest Link (IWL) model ... 95

6.22. Iterated Weakest Link (IWL) model ... 95

6.23. Iterated Weakest Link (IWL) model ... 95

6.24. IWL - Total profit ... 96

6.25. Reactive defense summary ... 96

6.26. Pentesting model ... 96

6.27. Pentesting model ... 97

6.28. Pentesting model ... 97

6.29. Pentesting model ... 97

6.30. Extensive form solution ... 98

6.31. Pentesting model ... 98

6.32. Pentesting helps - costs ... 98

6.33. Pentesting helps - total profit ... 99

6.34. Return on pentesting (ROPT) ... 99

6.35. Pentesting summary ... 100

6.36. Reading for next time ... 100

7. 7 Vulnerabilities and patches ... 100

7.1. What is spam? ... 100

7.2. What does spam look like? ... 101

7.3. Defenses ... 101

7.4. Spam is for profit ... 101

7.5. Costs for spammers ... 102

7.6. Spam conversion rate ... 102

7.7. Storm botnet measurement study ... 102

7.8. Storm botnet infiltration ... 104

7.9. Measurements: conversion line ... 104

7.10. Time to click ... 104

7.11. Effects of blacklisting ... 105

7.12. Geographic distributions of converters (1/2) ... 106

7.13. Geographic distributions of converters (2/2) ... 106

7.14. Conversion calculation ... 107

7.15. Next step: spam value chain ... 107

7.16. Understand the spammers' value chain ... 107

7.17. Pharmacy Express ... 108

7.18. Data collection and processing ... 108

7.19. Data source ... 109

7.20. Affiliate programs and infrastructure ... 109

7.21. Infrastructure sharing ... 110

7.22. Takedown potential ... 111

7.23. Take-away ... 111

7.24. Reading for next time ... 112

8. 8 Information sharing ... 112


8.1. Goal of risk management ... 112

8.2. Software vulnerabilities lifecycle ... 112

8.3. Incentive issues ... 113

8.4. Incentive issues ... 113

8.5. Incentive issues - software production ... 113

8.6. Incentive issues ... 113

8.7. Incentive issues - Vulnerability discovery ... 113

8.8. Classification of vulnerability markets ... 114

8.9. Vulnerability challenges ... 114

8.10. Vulnerability brokers ... 114

8.11. Exploit derivatives ... 114

8.12. Exploit derivative market ... 115

8.13. Exploit derivative market ... 115

8.14. Cyber-insurance ... 115

8.15. Comparison - functions ... 116

8.16. Comparison - market efficiency ... 116

8.17. Comparison of vulnerability markets ... 116

8.18. Critiques of a market-based approach ... 117

8.19. Is it worth discovering vulnerabilities? ... 117

8.20. Vulnerability lifecycle ... 117

8.21. Costs of a vulnerability ... 118

8.22. Costs of a vulnerability ... 118

8.23. Main conclusion ... 119

8.24. Incentive issues ... 119

8.25. Incentive issues - Applying patches ... 119

8.26. Model - game ... 120

8.27. Model - costs ... 120

8.28. Centralized system ... 120

8.29. Decentralized system ... 121

8.30. Coordination schemes ... 121

8.31. Reading for next time ... 121

9. 9 Intermediaries and regulation ... 121

9.1. Key factors in defense ... 121

9.2. Information sharing in theory ... 122

9.3. What kind of info to share? ... 122

9.4. A little history... ... 122

9.5. Disclosure arguments (1/2) ... 123

9.6. Disclosure arguments (2/2) ... 123

9.7. Incentives in info sharing ... 123

9.8. Assumptions ... 124

9.9. Infosharing model ... 124

9.10. Sharing is exogenous ... 124

9.11. Sharing is endogenous (1/2) ... 124

9.12. Sharing is endogenous (2/2) ... 125

9.13. Sharing security info in practice ... 125

9.14. DShield for log sharing ... 125

9.15. Blacklisting ... 125

9.16. Black/white/grey/red listings ... 126

9.17. Email handling behavior ... 126

9.18. DNS blacklists (DNSBLs) ... 126

9.19. Blacklisting criticism ... 126

9.20. How useful is the information? ... 127

9.21. Highly Predictive Blacklisting (HPB) ... 127

9.22. HPB properties ... 127

9.23. Results ... 127

9.24. Phishing ... 128

9.25. Cooperative defense against phishing ... 128

9.26. Phishing defense ... 129

9.27. Results (1/3) ... 129

9.28. Results (2/3) ... 129


9.29. Results (3/3) ... 130

9.30. Sharing phishing URLs ... 130

9.31. Phish-market protocol ... 130

9.32. Protocol overview ... 130

9.33. Sharing phishing URLs ... 131

9.34. Reading for next time ... 131

10. 10 Underground economy ... 131

10.1. What is privacy? ... 131

10.2. Anonymity or identifiability? ... 132

10.3. Anonymity or identifiability? (2) ... 132

10.4. Anonymity or identifiability? (3): the anonymity paradox ... 132

10.5. Anonymity or identifiability? (4) ... 133

10.6. Anonymity or identifiability? (5) ... 133

10.7. Anonymity or identifiability? (6): pro and contra ... 133

10.8. Anonymity or identifiability? (7): solving the anonymity paradox ... 133

10.9. Anonymity or identifiability? (8): solving the anonymity paradox ... 134

10.10. Anonymity ... 134

10.11. Anonymity (2) ... 135

10.12. Anonymity (3) ... 136

10.13. Unlinkability ... 136

10.14. Unlinkability (2) ... 137

10.15. Privacy related notions ... 137

10.16. Privacy metrics (1/2) ... 138

10.17. Privacy metrics (2/2) ... 138

10.18. Why is privacy a question of economics? ... 138

10.19. Off-line vs. on-line identities ... 139

10.20. Personal Information as an Economic Good ... 139

10.21. Personal Information as an Economic Good ... 139

10.22. Personal Information as an Economic Good ... 140

10.23. User profiling techniques ... 140

10.24. Web tracking: Cookies ... 140

10.25. Web tracking: Cookies ... 140

10.26. Web tracking: Cookies ... 141

10.27. Web tracking: Cookies ... 141

10.28. Web tracking: Super Cookies ... 142

10.29. Web tracking: Browser Fingerprinting ... 142

10.30. Inducing customers to try new goods ... 143

10.31. Collecting and controlling information ... 143

10.32. The value of private information: theory (1/2) ... 144

10.33. The value of private information: theory (2/2) ... 144

10.34. The value of private information: practice ... 144

10.35. The value of private information: practice ... 145

10.36. Problem with anonymity ... 146

10.37. Incentives for anonymity ... 147

10.38. Anonymity systems take-away ... 147

10.39. Price discrimination ... 148

10.40. Price Discrimination vs. Privacy ... 148

10.41. Price Discrimination vs. Privacy ... 148

10.42. Who should protect your privacy? ... 148

10.43. Solutions ... 149

10.44. Reading for next time ... 149

11. 11 Economics of privacy ... 149

11.1. Social networks and privacy ... 149

11.2. Economics issues in privacy ... 150

11.3. Survey of social networking sites ... 150

11.4. Evaluation methodology ... 150

11.5. Promotion techniques ... 150

11.6. Data collection during sign-up ... 151

11.7. Privacy controls ... 151

11.8. Invasive features ... 152


11.9. Comparison of privacy policies ... 152

11.10. Privacy affecting factors ... 152

11.11. Summary of the survey ... 152

11.12. User control over private information ... 153

11.13. Facebook privacy experiment ... 153

11.14. Third-party apps on FB ... 153

11.15. A two-sided market ... 153

11.16. Monetizing Facebook's app market ... 154

11.17. Facebook's and apps' informational market power ... 154

11.18. Frictionless adding of apps until 2010 ... 154

11.19. Privacy and security issues in the past ... 155

11.20. Regulatory intervention ... 155

11.21. Measurement study ... 156

11.22. Five app categories ... 156

11.23. Popularity of specific permissions ... 157

11.24. Scale of data requests ... 158

11.25. Example: "user_photos" permission ... 158

11.26. Discussion: A messy market ... 158

11.27. Other studies: users' confusion ... 159

11.28. Privacy improvements ... 159

11.29. Users' confusion about permissions ... 159

11.30. Design Principles ... 159

11.31. Basic design ... 159

11.32. Design extensions: checkbox design ... 160

11.33. Checkbox-app activity design ... 160

11.34. Checkbox-signal design ... 161

11.35. Checkbox-app activity-signal design ... 162

11.36. Experiment subjects ... 163

11.37. Implementation of treatments ... 163

11.38. Study protocol ... 164

11.39. Results: installation behavior ... 164

11.40. Results: overall information release ... 164

11.41. Results: opt-out decisions of installers ... 165

11.42. Distribution of opt-out behaviors ... 166

11.43. Quick summary ... 166

11.44. How should one interpret results? ... 166

11.45. Reading for next time ... 167

12. 12 Economics of privacy in social networks ... 167

12.1. Security usability problems ... 167

12.2. Usability requirements ... 167

12.3. What is usability? ... 168

12.4. PGP 5.0 usability study ... 168

12.5. Usability of PGP ... 169

12.6. Evaluation methods ... 169

12.7. Cognitive walkthrough ... 169

12.8. Cognitive walkthrough ... 170

12.9. Cognitive walkthrough ... 170

12.10. Cognitive walkthrough ... 170

12.11. Cognitive walkthrough ... 171

12.12. Cognitive walkthrough ... 171

12.13. Cognitive walkthrough ... 172

12.14. Cognitive walkthrough ... 172

12.15. Cognitive walkthrough ... 172

12.16. User test ... 173

12.17. User test ... 173

12.18. User test ... 173

12.19. User test ... 173

12.20. User test ... 174

12.21. User test ... 174

12.22. User test ... 174


12.23. Conclusion/Questions ... 175

12.24. User perception of security ... 175

12.25. Security Indicators ... 175

12.26. Security Indicators ... 175

12.27. More Security Indicators ... 176

12.28. More Security Indicators ... 176

12.29. More Security Indicators ... 176

12.30. More Security Indicators ... 177

12.31. More Security Indicators ... 177

12.32. More Security Indicators ... 177

12.33. Security Toolbar Abstractions ... 178

12.34. Study Scenario ... 178

12.35. Study Scenario ... 178

12.36. Attack Types ... 181

12.37. Security Toolbar Display ... 182

12.38. Attack Pattern ... 182

12.39. Recruitment ... 182

12.40. Spoof Rates With Different Toolbars ... 183

12.41. Spoof Rates With Different Attacks ... 183

12.42. Why Did Users Get Fooled? ... 183

12.43. Results ... 184

12.44. Conclusions ... 184

12.45. Reading for next time ... 184

13. 13 Security adoption and usability ... 185

13.1. Risk management process ... 185

13.2. Risk treatment - options ... 185

13.3. Insurance for cyber-risk sharing ... 185

13.4. Which risks are covered? ... 186

13.5. Single-user case ... 186

13.6. Single-user insurance ... 187

13.7. Cyber-insurance influencing factors ... 187

13.8. Information security risk management ... 187

13.9. Cyber-insurance: correlation ... 188

13.10. Correlated incidents ... 188

13.11. Supply-side: risk arrival with correlation ... 188

13.12. Supply-side: risk arrival with correlation ... 188

13.13. Demand-side: security risk management ... 189

13.14. Demand-side: insurance decision ... 189

13.15. Supply-side: estimation of risks ... 190

13.16. Supply-side: estimation of risks ... 190

13.17. Market equilibrium with correlation ... 191

13.18. Market equilibrium existence ... 191

13.19. Cyber-insurance: interdependence ... 192

13.20. Interdependence simple model ... 192

13.21. Cyber-insurance game ... 193

13.22. Market types: monopoly market ... 193

13.23. Market types: competitive market ... 194

13.24. Market types: competitive market ... 194

13.25. Insurance policies by ISPs ... 194

13.26. Interdependent security of ISPs ... 195

13.27. Cyber-insurance: asymmetric information ... 195

13.28. Cyber-insurance: moral hazard ... 195

13.29. Cyber-insurance: moral hazard ... 195

13.30. Cyber-insurance solution: avoid moral hazard ... 196

13.31. Cyber-insurance: adverse selection ... 197

13.32. Insurance policies and Nash equilibria ... 197


Economics of Security and Privacy

1. 1 Introduction

1.1. First look - quick overview

• What is security?

• Why do we care about economics here?

• Prevent or detect/block?

• How much is enough to invest in defense?

• Do we work together, or not?

• Who is responsible?

• Collateral - privacy

• and much more...

1.2. Security

• Merriam-Webster, [4b, (1)]:

• measures taken to guard against espionage or sabotage, crime, attack, or escape

• Dictionary.com, [1]:

• freedom from danger, risk, etc.; safety

• Wikipedia.com:

• Security is the degree of protection against danger, damage, loss, and crime.

• Information security means protecting information and information systems from unauthorized access, use, disclosure, disruption, modification, perusal, inspection, recording or destruction.

• Communications security is the discipline of preventing unauthorized interceptors from accessing telecommunications in an intelligible form, while still delivering content to the intended recipients.

• Computer security can focus on ensuring the availability and correct operation of a computer system without concern for the information stored or processed by the computer.

1.3. Our definition

• security: prevention or detection of an attack on the computer system

• attack: deliberate attempt to compromise the intended use of a computer system

• a few important points

• attacker: a malicious entity whose aim is to prevent the users of the computing system from achieving their goal (primarily privacy, integrity, and availability of data)


• security vs. safety

• Why economics of computer security?

• strategic adversary: rational, profit-seeking

• in general: see "Security Protocols" course from Prof. Levente Buttyan (Hírközlő rendszerek biztonsága szakirány)

1.4. More definitions

• vulnerability

• attacks usually exploit vulnerabilities

• a vulnerability is a flaw or weakness in the system's design, implementation, or operation and management

• most systems have vulnerabilities, but not every vulnerability is exploited

• whether a vulnerability is likely to be exploited depends on the difficulty of the attack and the perceived benefit of the attacker

• threat

• a possible way to exploit vulnerabilities

• a potential attack

1.5. More on attacks

• passive attack

• requires no intervention into the operation of the system

• typically consists in the passive acquisition of some information that should not be available to the attacker

• typical examples:

• eavesdropping message contents

• traffic analysis: gaining knowledge of data by observing the characteristics of communications that carry the data; even if message content is encrypted, an attacker can still

• determine the identity and the location of the communicating parties

• observe the frequency and length of the messages being exchanged

• guess the nature of the communication

• difficult to detect, should be prevented

1.6. More on attacks

• active attack

• requires an active intervention into the operation of the system

• typical examples:

• masquerade (spoofing): an entity pretends to be a different entity

• replay: capture and subsequent retransmission of data

• modification (substitution, insertion, destruction): (some parts of the) legitimate messages are altered or deleted, or fake messages are generated; if done in real time, it needs a "man in the middle"

• denial of service: normal use or management of the system is prevented or inhibited; e.g., a server is flooded by fake requests so that it cannot reply to normal requests

• difficult to prevent, should be detected

1.7. Examples of attacks

• password sniffing in FTP

• DoS against a web server

• spam

1.8. Communication security - a simple view

[Figure: Alice and Bob communicate over a channel; Malice sits between them]

1.9. Communication security services

CIA principles

• confidentiality

• protection of information from unauthorized disclosure

• information can be

• content of communications → (content) confidentiality

• meta-information (derived from observation of traffic flows) → traffic flow confidentiality

• integrity protection

• aims to detect message modification and replay

• provides assurance that data received are exactly as sent by the sender

• in case of a stream of messages (connection oriented model), integrity means that messages are received as sent, with no duplication, modification, insertion, deletion, reordering, or replays

• availability

• the service is reachable for the users

1.10. Communication security services

• authentication

• aims to detect masquerade (spoofing)

• provides assurance that a communicating entity is the one that it claims to be

• peer entity authentication

• data/message origin authentication

• non-repudiation

• provides protection against denial by one entity involved in a communication of having participated in all or part of the communication

• non-repudiation of message origin

• non-repudiation of message delivery

1.11. Placement of security services

• some services can more naturally be implemented at the application layer (e.g., non-repudiation)

• some services better fit in the link layer (e.g., traffic flow confidentiality)

• but many services can be provided at any layer (e.g., authentication, confidentiality, integrity)

• lower layer (e.g., link-by-link encryption):

• services are generic, can be used by many applications


• protection mechanisms are transparent to the user

• higher layer (e.g., end-to-end authentication):

• services are more application specific

• more user awareness

1.12. Security concepts summary

• basic concepts

• security, attack, vulnerability, threat

• passive vs. active attacks

• eavesdropping, traffic analysis, masquerade (spoofing), modification, replay, denial of service

• main communication security services: confidentiality, integrity, availability, authentication, non-repudiation

• some real world examples

• ARP spoofing, e-mail forgery, eavesdropping Telnet and FTP passwords

1.13. Internet communication

[Figure: Alice's message "Hi Bob, I'm Alice." travels through several ISPs and a content provider on its way to Bob]

1.14. Internet communication

[Figure: the Internet communication path between Alice and Bob, repeated]

1.15. When things go wrong

[Figure: Malice, attached through his own ISP, injects the message "Hi Bob, I'm Alice." to impersonate Alice]

1.16. When things go wrong

[Figure: Malice floods Bob with spam — "Hi Bob, Want to buy Vi4gr4? Click: rndm.medpillsrx.ru" — leaving Bob puzzled: "What is Rx?"]

1.17. The engineering solution

• develop secure software

• design a public key infrastructure for authentication and data confidentiality

• raise firewalls to block dangerous connections

• compile blacklists to block malicious IP addresses (from sending emails)

• ...

So why do we still have security issues?


1.18. Information security simplified

• attacker's advantage:

attack (Malice): cheap, proactive, easy to measure, illegal

defense (Bob): expensive, reactive, hard to measure, must be lawful

1.19. Information security: the big picture

[Figure: Bob reaches a content provider through a chain of ISPs]

1.20. Information security: the big picture

[Figure: other users — Carol, Dave, ... — share the same infrastructure as Bob]

1.21. Information security: the big picture

[Figure: a software vendor is added to the picture]

1.22. Information security: the big picture

[Figure: Alice joins, with her own ISP and domain hosting]

1.23. Information security: the big picture

[Figure: a domain registrar is added]

1.24. Information security: the big picture

[Figure: Malice appears, with his own ISP and domain hosting]

1.25. Information security: the big picture

[Figure: a security company is added]

1.26. Information security: the big picture

[Figure: the complete big picture — Bob and fellow users Carol, Dave, ..., Alice and Malice with their ISPs and domain hosting, the content provider, software vendor, domain registrar, security company, and cyber-insurance company]

1.27. Our journey

[Figure: the journey through the course — each actor is mapped to the topics ahead:]

• Bob → security investment

• ISP → ISP control, botnet takedown, regulations

• privacy → security vs. privacy, user valuation

• software → software updates, vulnerability markets

• security company → security lemon market, information sharing

• cyber-insurance → asymmetric information, correlated large-scale events

• Malice → understanding the adversary

• Carol, Dave, ... → interdependent security

1.28. Reading for next time

• R. Anderson, "Why Information Security is Hard - An Economic Perspective," ACSAC 2001

optional:

• Hal Varian, "Managing Online Security Risks," NY Times, 2001

• 10 minutes of questions

1.29. Information security incentives

• Common view: technology solves the problem


• buy better software

• frequent updates to fix vulnerabilities

• hire a security expert for audit

Why are systems still insecure?

Answer: misaligned incentives

1.30. Information security incentives

• incentives for the parties

• Bob: system functional (email, web)

• security is not the main goal for users

• Malice: money (exploit resources)

• Bob's information security investment

• buy better software

• frequent updates to fix vulnerabilities

• hire a security expert for audit

• What is the optimal security investment for Bob?

• given: service infrastructure, threat model, security budget, user population, time horizon

Economic analysis gives the answer

1.31. Misaligned incentives

Examples:

• fraud against ATMs


• medical payment system (supported by insurers)

• Common Criteria evaluation (by a vendor-commissioned third party)

the party in a position to implement protection is not the party that suffers the losses!

1. incentive issue: liability

1.32. Misaligned incentives: DDoS

• One more example: DDoS attack

[Figure: a DDoS attack — compromised machines of Carol, Dave, ... flood the network toward Bob and the content provider]

1.33. Misaligned incentives: DDoS

• One more example: DDoS attack

• variant of the Tragedy of the Commons (G. Hardin, "The tragedy of the commons," Science, 1968)

[Figure: the same DDoS scenario — each unprotected machine adds to the congestion shared by all]

1.34. Misaligned incentives: DDoS

• One more example: DDoS attack (Tragedy of the Commons)

[Figure: the DDoS scenario again — securing one's own machine mainly benefits the others]

2. incentive issue: free-riding

1.35. Possible solution

• transfer liability to the ISPs

• technically capable

• aggregate traffic

• legally accessible

• more on the topic in Chapter 8


Any thoughts?

1.36. IT markets

Three properties:

• network effect - value grows with user base

• high fixed costs and low marginal costs

• lock-in effect

These properties reward getting to the market quickly and favor monopoly products.

3. incentive issue: network effects and monopolistic markets

1.37. Reminder: attacker's advantage

• Attack is favored against defense

• the attacker needs only one vulnerability to attack

• defender needs to find all vulnerabilities

• Remedies

• not all flaws are critical

• one patch can fix many flaws

• BUT: when you discover a bug

• use it to defend your own systems, or to attack others?

• attack is more visible

1.38. Asymmetric info: Lemon markets

• the theory of asymmetric information (G. A. Akerlof, "The market for 'lemons': quality uncertainty and the market mechanism," Quarterly Journal of Economics 84, 488 (1970))

• if the market is not transparent

• bad products drive out good products

• examples of security products:

• secure USB sticks, firewall products

• need good signals to increase transparency

4. incentive issue: asymmetric information

1.39. Economics of information security

Causes: no liability, monopolistic software markets, asymmetric information, interdependence

Consequences: no security investments, correlated events, lemon market, tragedy of the commons

1.40. Reading for next time

• R. Anderson, "Why Information Security is Hard - An Economic Perspective," ACSAC 2001

• M. Félegyházi and J.-P. Hubaux, Game Theory in Wireless Networks: A Tutorial, techreport, 2006

optional:

• H. Varian, "Managing Online Security Risks," NY Times, 2001

• B. Schneier, Security as a lemon market,

http://www.wired.com/politics/security/commentary/securitymatters/2007/04/securitymatters_0419

• G. Hardin, "The tragedy of the commons," Science, 1968

• G. A. Akerlof. "The market for 'lemons': quality uncertainty and the market mechanism." In Quarterly Journal of Economics 84, 488 (1970)

2. 2 Introduction to microeconomics

2.1. Information security: the big picture

[Figure: the "big picture" ecosystem from Section 1.26, repeated]

2.2. Information security

• incentives for the parties


• Bob: system functional (email, web)

• security is not the main goal for users

• Malice: money (exploit resources)

• Bob's information security investment

• buy better software

• frequent updates to fix vulnerabilities

• hire a security expert for audit

• What is the optimal security investment for Bob?

• given: service infrastructure, threat model, security budget, user population, time horizon

Economic analysis gives the answer

2.3. Why economics?

• technology - computers

• economics - humans

• Economics tools

• game theory

• risk analysis and management

• behavioral economics

• economic policy

2.4. Why economics?

• technology - computers


• economics - humans

• Economics tools

• game theory (Chapter 2)

• risk analysis and management (Chapter 3)

• behavioral economics

• economic policy

2.5. Game theory

• What is a game?

• Merriam-Webster:

• the analysis of a situation involving conflicting interests (as in business or military strategy) in terms of gains and losses among opposing players

• Wikipedia:

• In mathematics, game theory models strategic situations, or games, in which an individual's success in making choices depends on the choices of others

• Examples?

2.6. Chapter outline

• Introduction

• Static games

• Dynamic games

• Repeated games

2.7. What is a game?

• players, strategies, utilities, (rules)


2.8. Game: formal definition

• players are rational = try to maximize their utility

• game formulation: G = (P,S,U)

• P: set of players

• S: set of strategy functions

• U: set of payoff functions

2.9. Classification of games

complete info vs. perfect info!

2.10. Chapter outline


• B.1 Introduction

• B.2 Static games

• B.3 Dynamic games

• B.4 Repeated games

2.11. Prisoner's Dilemma

Flood and Dresher, RAND, 1950

Tucker, 1950

[Payoff matrix: Alice × Bob]

C - cooperate, D - defect (with the partner!)

2.12. Normal (strategic) form

• Two-player game

• players = ?

• strategies = ?

• utilities = ?

[Payoff matrix: Alice × Bob]

2.13. Strict dominance

Strict dominance: strictly best strategy, for any strategy of the other player(s)

Strategy $s_i$ strictly dominates strategy $s_i'$ if

$u_i(s_i, s_{-i}) > u_i(s_i', s_{-i})$ for all $s_{-i}$

where:

$u_i$: utility function of player $i$

$s_{-i}$: strategies of all players except player $i$

[Payoff matrix: Alice × Bob]

In the PD, strategy D strictly dominates strategy C

2.14. Iterative strict dominance


Solution by iterative strict dominance:

[Payoff matrix: iterated elimination on the Alice × Bob matrix]

solution: (D, D)

BUT (C, C) would result in a better outcome for both

Dilemma
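The payoff matrix is not reproduced above, so the sketch below assumes the standard PD payoffs (prison years written as negative utilities); it mechanizes exactly the elimination procedure just described:

C, D = "C", "D"
# Assumed standard PD payoffs: payoff[(alice_move, bob_move)] = (u_Alice, u_Bob)
payoff = {(C, C): (-1, -1), (C, D): (-5, 0),
          (D, C): (0, -5), (D, D): (-3, -3)}

def strictly_dominated(strats_alice, strats_bob):
    """Return the strictly dominated strategies of each player."""
    dom_a = {s for s in strats_alice for t in strats_alice
             if t != s and all(payoff[(t, b)][0] > payoff[(s, b)][0] for b in strats_bob)}
    dom_b = {s for s in strats_bob for t in strats_bob
             if t != s and all(payoff[(a, t)][1] > payoff[(a, s)][1] for a in strats_alice)}
    return dom_a, dom_b

def iterated_elimination(strats_alice, strats_bob):
    """Repeatedly drop strictly dominated strategies until none remain."""
    while True:
        dom_a, dom_b = strictly_dominated(strats_alice, strats_bob)
        if not dom_a and not dom_b:
            return strats_alice, strats_bob
        strats_alice, strats_bob = strats_alice - dom_a, strats_bob - dom_b

print(iterated_elimination({C, D}, {C, D}))  # ({'D'}, {'D'})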

2.15. A look into the crystal ball

two-player, static (single stage), discrete value PD with perfect and complete information

How to extend?

• multi-player

• iterated

• continuous values

• imperfect, incomplete

• (many) more games and solution concepts :)

2.16. Prisoner's Dilemma with a boss

No strictly dominated strategies!

2.17. PD with a boss in normal form

[Payoff matrix: PD with a boss]

No strictly dominated strategies!

2.18. Weak dominance

Weak dominance: strictly better strategy for at least one opponent strategy

Strategy $s_i'$ is weakly dominated by strategy $s_i$ if

$u_i(s_i, s_{-i}) \ge u_i(s_i', s_{-i})$ for all $s_{-i}$

with strict inequality for at least one $s_{-i}$

Iterative weak dominance can be applied similarly, BUT the result of iterative weak dominance is not unique in general!

2.19. Iterative weak dominance

[Payoff matrix: Alice × Bob, with weakly dominated strategies eliminated]

We found the NE! ...or not?

Iterative weak dominance can be applied, BUT its result is not unique in general!

2.20. Nash equilibrium

Nash Equilibrium: no player can increase its payoff by deviating unilaterally

Prisoner's Dilemma

PD with the Boss

2.21. Finding Nash equilibria

Prisoner's Dilemma


PD with the Boss

2.22. Nash equilibrium - formal definition

Strategy profile $s^*$ constitutes a Nash equilibrium if, for each player $i$,

$u_i(s_i^*, s_{-i}^*) \ge u_i(s_i, s_{-i}^*)$ for every strategy $s_i \in S_i$

where:

$u_i$: payoff function of player $i$

$s_i$: strategy of player $i$

The best response of player $i$ to the profile of strategies $s_{-i}$ is a strategy $s_i$ such that:

$b_i(s_{-i}) = \arg\max_{s_i \in S_i} u_i(s_i, s_{-i})$

Nash Equilibrium = Mutual best responses

Caution! Many games have more than one Nash equilibrium
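Since a Nash equilibrium is a pair of mutual best responses, the pure equilibria of a finite two-player game can be found by brute force. A minimal sketch, reusing the payoff-dictionary format of the PD example above and the Chicken payoffs that appear in Section 2.35:

def pure_nash_equilibria(payoff, strats_alice, strats_bob):
    """All profiles (a, b) where neither player gains by deviating alone."""
    return [(a, b)
            for a in strats_alice for b in strats_bob
            if all(payoff[(a, b)][0] >= payoff[(x, b)][0] for x in strats_alice)
            and all(payoff[(a, b)][1] >= payoff[(a, y)][1] for y in strats_bob)]

# Chicken game, with the payoffs of the extensive form in Section 2.35
# (T = keep going, C = chicken out):
chicken = {("T", "T"): (-10, -10), ("T", "C"): (1, -1),
           ("C", "T"): (-1, 1), ("C", "C"): (0, 0)}
print(pure_nash_equilibria(chicken, ["T", "C"], ["T", "C"]))
# [('T', 'C'), ('C', 'T')] -> the two pure Nash equilibria discussed in 2.25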

2.23. Finding NE in a complex game

[Payoff matrix: a larger Alice × Bob game]

2.24. The Chicken game

Rebel without a cause, 1955


2.25. The Chicken game

Chicken game normal form

There is no strictly or weakly dominating strategy

There are two Nash equilibria

2.26. Mixed strategy Nash equilibrium

$p$: probability for Alice to chicken out

$q$: probability for Bob to chicken out

objectives:

• Alice: choose $p$ to maximize her expected payoff $u_A(p, q)$

• Bob: choose $q$ to maximize his expected payoff $u_B(p, q)$

$(p^*, q^*)$ is a Nash equilibrium if neither player can increase her expected payoff by unilaterally changing her own mixing probability
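With the Chicken payoffs from the extensive form in Section 2.35 (assumed here, since the normal-form matrix is not reproduced above), the mixed equilibrium follows from the indifference condition:

\[
\begin{aligned}
u_A(T) &= q \cdot 1 + (1-q)(-10) = 11q - 10\\
u_A(C) &= q \cdot 0 + (1-q)(-1) = q - 1
\end{aligned}
\]

Setting $u_A(T) = u_A(C)$ gives $11q - 10 = q - 1$, i.e. $q^* = 9/10$, and by symmetry $p^* = 9/10$: in the mixed-strategy Nash equilibrium each player chickens out with probability 0.9.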

2.27. Coordination: The Chicken game in practice

Chicken game normal form

Alice is the winner

Bob is the winner

both want to be winners

2.28. Head or tail game

[Payoff matrix: matching pennies (head or tail)]

There is no pure-strategy Nash equilibrium

Another example?

2.29. Nash Theorem


Nash, 1950:

Every finite game has an equilibrium point.


J. Nash, "Non-cooperative games," Annals of Mathematics, vol. 54 nr. 2, 1951

Heard of John Nash?

2.30. Efficiency of Nash equilibria

How to choose between several Nash equilibria ?

Pareto-optimality: A strategy profile is Pareto-optimal if it is not possible to increase the payoff of any player without decreasing the payoff of another player.


2.31. Finding Pareto-optimality in a complex game


[Payoff matrix: a larger Alice × Bob game]

2.32. Pareto-optimality in the PD

[Payoff matrix: Prisoner's Dilemma]


2.33. How to study Nash equilibria ?

Properties of Nash equilibria to investigate:

• existence

• uniqueness

• efficiency (Pareto-optimality)

• emergence (dynamic games, agreements)

2.34. Chapter outline

B.1 Introduction

B.2 Static games

B.3 Dynamic games

B.4 Repeated games

2.35. Extensive-form games

• usually to model sequential decisions

• game represented by a tree

• Example 3 modified: the Sequential Chicken game: Alice plays first, then Bob plays.

[Game tree: Alice moves first (T or C); Bob observes her move and then chooses T or C; payoffs (Alice, Bob): (T,T) → (-10,-10), (T,C) → (1,-1), (C,T) → (-1,1), (C,C) → (0,0)]

2.36. Strategies in dynamic games

• The strategy defines the moves for a player for every node in the game, even for those nodes that are not reached if the strategy is played.


strategies for Alice:

T, C

strategies for Bob:

TT, TC, CT and CC

[Game tree repeated from Section 2.35]

If they have to decide independently: three Nash equilibria

(T,CT), (T,CC) and (C,TT)

2.37. Backward induction

• Solve the game by reducing from the final stage

• Eliminates Nash equilibria that are incredible threats

incredible threat: (C, TT)

[Game tree repeated from Section 2.35]
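Backward induction is mechanical enough to script; a minimal sketch over the tree above:

# Sequential Chicken: Alice moves first, Bob observes her move and replies.
# payoff[(alice_move, bob_move)] = (u_Alice, u_Bob), as in the tree above.
payoff = {("T", "T"): (-10, -10), ("T", "C"): (1, -1),
          ("C", "T"): (-1, 1), ("C", "C"): (0, 0)}
moves = ("T", "C")

# Final stage first: Bob's best reply to each possible move of Alice.
bob_reply = {a: max(moves, key=lambda b: payoff[(a, b)][1]) for a in moves}

# Then Alice, anticipating Bob's replies.
alice_move = max(moves, key=lambda a: payoff[(a, bob_reply[a])][0])

print(bob_reply)    # {'T': 'C', 'C': 'T'} -> Bob's strategy CT
print(alice_move, payoff[(alice_move, bob_reply[alice_move])])  # T (1, -1)
# The incredible-threat equilibrium (C, TT) is eliminated: Alice plays T.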


2.38. Subgame perfection

• Extends the notion of Nash equilibrium

One-deviation property: A strategy $s_i$ conforms to the one-deviation property if there does not exist any node of the tree in which a player can gain by deviating from $s_i$ at that node only, applying $s_i$ otherwise.

Subgame perfect equilibrium: A strategy profile $s$ constitutes a subgame perfect equilibrium if the one-deviation property holds for every strategy $s_i$ in $s$.

Finding subgame perfect equilibria using backward induction

Subgame perfect equilibria: (T, CT) and (T, CC)

[Game tree repeated from Section 2.35]

2.39. Chapter outline

B.1 Introduction

B.2 Static games

B.3 Dynamic games

B.4 Repeated games

2.40. Repeated games

• repeated interaction between the players (in stages)

• move: decision in one interaction

• strategy: defines how to choose the next move, given the previous moves

• history: the ordered set of moves in previous stages

• most prominent games are history-1 games (players consider only the previous stage)

• initial move: the first move with no history

• finite-horizon vs. infinite-horizon games

• stages denoted by $t$ (or $k$)

2.41. Utilities: Objectives in the repeated game

• finite-horizon vs. infinite-horizon games

• myopic vs. long-sighted repeated game

myopic: $\bar{u}_i = u_i(t)$

long-sighted finite: $\bar{u}_i = \sum_{t=1}^{T} u_i(t)$

long-sighted infinite: $\bar{u}_i = \lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} u_i(t)$

payoff with discounting: $\bar{u}_i = \sum_{t=1}^{\infty} \delta^t u_i(t)$, where $0 < \delta < 1$ is the discount factor
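For instance, a constant stage payoff $u$ under discounting sums to a finite total,

$\sum_{t=1}^{\infty} \delta^t u = \frac{\delta}{1-\delta} u, \qquad 0 < \delta < 1,$

which is what makes strategy comparisons in infinite-horizon games well defined.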

2.42. Strategies in the repeated game

• usually, history-1 strategies, based on different inputs:

• the other's behavior: $m_i(t+1) = f(m_{-i}(t))$

• the other's and own behavior: $m_i(t+1) = f(m_i(t), m_{-i}(t))$

• payoff: $m_i(t+1) = f(u_i(t))$

where $m_i(t)$ denotes player $i$'s move in stage $t$

Example strategies in the Prisoner's Dilemma: AllC (always cooperate), AllD (always defect), and TFT (Tit-for-Tat: cooperate first, then copy the opponent's previous move).

2.43. Experiment: Tournament by Axelrod, 1984

• any strategy can be submitted (history-X)

• strategies play the Repeated Prisoner’s Dilemma in pairs

• number of rounds is finite but unknown

• TFT was the winner


• second round: TFT won again

REASONS:

• nice

• provocable

• not envious

• not too clever
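A toy reproduction of such a tournament is straightforward; the payoffs (3,3 / 5,0 / 1,1) and the three-strategy field below are illustrative stand-ins, not Axelrod's actual setup:

# Round-robin repeated PD in the spirit of Axelrod's tournament.
# A strategy maps the opponent's previous move (None in round 1) to C or D.
payoff = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tft(opp_last):   return "C" if opp_last is None else opp_last  # Tit-for-Tat
def all_d(opp_last): return "D"
def all_c(opp_last): return "C"

def match(strat_a, strat_b, rounds=200):
    """Play one repeated-PD match; return the players' total payoffs."""
    last_a = last_b = None
    total_a = total_b = 0
    for _ in range(rounds):
        a, b = strat_a(last_b), strat_b(last_a)
        total_a += payoff[(a, b)][0]
        total_b += payoff[(a, b)][1]
        last_a, last_b = a, b
    return total_a, total_b

field = {"TFT": tft, "AllD": all_d, "AllC": all_c}
scores = {name: sum(match(s, t)[0] for t in field.values())
          for name, s in field.items()}
print(scores)  # who wins depends on the field: TFT never beats its opponent
               # in a single match, yet thrives in cooperative populations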

2.44. N-player PD: Tragedy of the Commons

G. Hardin, "The Tragedy of the Commons," Science, 1968

[Photo of common grazing land; source: Sharon Loxton, Wikimedia Commons]

2.45. Discussion on game theory

• Rationality

• Payoff function and cost

• Pricing and mechanism design (to promote desirable solutions)

• Infinite-horizon games and discounting


• Reputation and norms

• Cooperative games

• Imperfect / incomplete information

2.46. Reading for next time

• Blakley, B. and McDermott, E. and Geer, D., "Information security is information risk management,"

Proceedings of the 2001 workshop on New security paradigms, 2001

optional:

• L.D. Bodin, L.A. Gordon, M.P. Loeb, "Information security and risk management," Communications of the ACM, 2008

• J. Mounzer, T. Alpcan, and N. Bambos, "Integrated security risk management for it-intensive organizations,"

in Proc. of 6th Intl. Conf. on Information Assurance and Security (IAS 2010), Atlanta, GA, USA, August 2010

3. 3 IT risk management

3.1. Security is risk management

• risk:

• Merriam-Webster (1): possibility of loss or injury

• Dictionary (1): exposure to the chance of injury or loss; a hazard or dangerous chance:

• Wikipedia: Risk is the potential that a chosen action or activity (including the choice of inaction) will lead to a loss (an undesirable outcome). The notion implies that a choice having an influence on the outcome exists (or existed).

3.2. Security is risk management

• risk management: Wikipedia: risk management is the identification, assessment, and prioritization of risks followed by coordinated and economical application of resources to minimize, monitor, and control the probability and/or impact of unfortunate events or to maximize the realization of opportunities. CISA Review Manual: Risk management is the process of identifying vulnerabilities and threats to the information resources used by an organization in achieving business objectives, and deciding what countermeasures, if any, to take in reducing risk to an acceptable level, based on the value of the information resource to the organization.

3.3. More concepts

• vulnerability = a possibility to attack

• flaw or weakness in the hardware, software or design

• ex: software bugs

• threat agent (= attacker)

• threat = potential for a threat agent to exploit a vulnerability


• ex: disk is not encrypted, but the device is fixed anyway

• also need motivation for an attacker

• risk = threat realization with considered impact

3.4. Risk management (simplified)

[Figure: the risk manager facing the question of how to treat risks]

3.5. Goal of risk management

• vulnerabilities → threats → incidents → losses

Goal: Minimize the costs associated with risks (threats)

3.6. Risk management lifecycle

source: Systems Engineering Fundamentals. Defense Acquisition University Press, 2001

3.7. Risk management standards

• ISO/IEC 27000 series - Information security management systems

• 27005:2011 - Information security risk management

• generally accepted guidelines of implementing information management systems and also serves to perform audits

• open source support: Enterprise Security Information System (ESIS)

• NIST SP 800-30

• ISACA Risk IT

• Open Source Security Testing Methodology Manual (OSSTMM)

• ISO/IEC 15408 - Common Criteria for Information Technology Security Evaluation (abbreviated as Common Criteria or CC)

3.8. Risk Management process (ENISA)


European Network and Information Security Agency (ENISA), "Risk Management: Implementation principles and Inventories for Risk Management/Risk Assessment methods and tools," June 2006

3.9. Risk Management process (Risk-IT)


ISACA, "Risk-IT framework," 2009

3.10. Risk management phases

• risk governance (RG)

• risk mgmt context

• define criteria

• profile definition

• requirements

• resources

• risk monitoring and review (RM)

• monitoring

• communication

• awareness


• risk assessment (RA)

• risk analysis

• identification

• estimation

• risk evaluation

• risk treatment (RT)

• prevent

• mitigate

• transfer

• accept

3.11. Decision-makers

RG

• senior management

• chief information officer (CIO)

• information system security officer (ISSO)

• system and information owners

• security practitioners (sysadmins, security specialists)

• security awareness trainers

ISACA, "Risk-IT framework," 2009

3.12. Risk management planning and governance

RG

• develop an enterprise risk management strategy

• establish and maintain a risk management plan

• risk appetite

• risk tolerance

• ensure that IT risk management is embedded in the system

• integrate with business processes

• provide resources for risk management

• establish responsibilities and accountability

generic control of risk management


3.13. Behavior towards risks

RG

• risk appetite: the propensity to engage with risks

• risk-averse - risk-neutral - risk-taking

• risk tolerance: tolerance towards the difference from the risk level as defined in risk appetite

ISACA, "Risk-IT framework," 2009

3.14. Key factors to success

RG

• continuous support from top management

• central management - common strategy

• successful integration with business processes

• optimize tasks and controls (avoid over-control)

• compliant with company's business philosophy

• continuous training

• never-ending process!

ENISA, "Risk Management: Implementation principles and Inventories for Risk Management/Risk Assessment methods and tools," June 2006


3.15. Risk assessment

RA

NIST SP800-30, "Risk Management Guide for Information Technology Systems," July 2002

3.16. Risk assessment

RA

• Risk assessment

• identification

• persons, assets and system info

• technical / mgmt / operational controls

• information gathering - info sources

• threat sources - attacker model

• vulnerability identification

• analysis / estimation

• control analysis - security options (ROSI)

• categorize threats by likelihood

• impact analysis - system critical incidents

• evaluation

• risk determination


3.17. Risk analysis - flowchart

RA

NIST SP800-30, "Risk Management Guide for Information Technology Systems," July 2002

3.18. Measuring risks: simplified

RA

• Annualized Loss Expectancy (ALE)

• ALE = ARO * SLE = ARO * AV * EF

• ARO - Annualized Rate of Occurrence (likelihood)

• SLE - Single Loss Expectancy (= AV * EF)

• AV - Asset Value (impact)

• EF - Exposure Factor

• example:

• prob. of a server failing: 0.01

• data worth $500,000

• most probably 30% destroyed

• ALE = 0.01 * $500,000 * 0.3 = $1,500

Problems with the ALE approach are discussed in Section 3.22.
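The formula is a one-line function; a quick Python check of the example above (a sketch, just encoding the definition):

# Annualized Loss Expectancy, exactly as defined above:
# ALE = ARO * SLE, with SLE (single loss expectancy) = AV * EF.
def ale(aro, asset_value, exposure_factor):
    return aro * asset_value * exposure_factor

# The server example: failure probability 0.01 per year, data worth
# $500,000, about 30% of it destroyed in an incident.
print(ale(0.01, 500_000, 0.3))  # 1500.0 -> expected loss of $1,500 per year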


3.19. Perceived composite risk (PCR)

RA

L.D. Bodin, L.A. Gordon, M.P. Loeb, "Information security and risk management," Communications of the ACM, 2008

PCR is a weighted sum of three components (losses $x$ measured in $ millions):

$PCR = A \cdot E[\text{loss}] + B \cdot E[\text{severe loss}] + C \cdot \sigma[\text{loss}]$

• expected loss

• expected severe loss (losses above a severity threshold)

• standard deviation of the loss

3.20. PCR: calculate weights

RA

• CISO decides about the importance of these factors

• A + B + C = 1 and A,B,C > 0

• weights calculated using Analytic Hierarchy Process (AHP)(check on Wikipedia, it's quite interesting)

3.21. PCR example

RA

• A = 0.4, B = 0.4, C = 0.2
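A sketch of the PCR computation with these weights; the loss distribution and the $2M severity threshold below are hypothetical placeholders, since the example's figures are not reproduced above:

import statistics

# Perceived Composite Risk with the weights from the example above.
losses = [0.0, 0.5, 1.0, 2.5, 4.0]   # equally likely outcomes, in $ millions
A, B, C = 0.4, 0.4, 0.2

expected_loss = statistics.mean(losses)
expected_severe_loss = sum(x for x in losses if x >= 2.0) / len(losses)
stdev_loss = statistics.pstdev(losses)

pcr = A * expected_loss + B * expected_severe_loss + C * stdev_loss
print(round(pcr, 3))  # 1.453 -> a single number the CISO can rank risks by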


3.22. ALE method's "failure"

RA

• ALE method's "failure"

• too many details

• difficult to implement

• number of scenarios is too high

• technology view on risk

• deterministic rather than probabilistic

• dependence on information

• new methods

• simplify risk analysis

• mostly short-term

K. Soo Hoo, "How Much Is Enough? A Risk-Management Approach to Computer Security," PhD thesis, Stanford 2000

3.23. Improved methods

RA

simplify - tractable way to analyze risks

• Integrated Business Risk management framework

• focuses on impact and added value


• security like other business risks

• simplifies management

• valuation-driven methods

• no data

• ignore incident likelihoods and focus on asset value

• suffer from the simplification

3.24. Improved methods (cont'd)

RA

• scenario analysis

• often used to dramatize impact (by consultants)

• limited scope

• good practices

• common engineering response

• conformance to policies results in (some) protection

• also protects against liability claims

• de-coupled from data collection and analysis

• efficiency depends on

• compliance costs

• process to define practices / rules

3.25. Quantitative risk management

RA

• from qualitative to quantitative methods - share information

• Key enabler: information = data (potentially historic)

• vulnerabilities

• incidents

• losses

• effectiveness of countermeasures

• steps

• register incidents - proper forensics

• report

• summarize in a central(ized) database


• driving force

• insurance ??? (more in Chapter 10)

• governments ?

3.26. Risk treatment - options

RT

determine the appropriate controls

• avoidance

• mitigation

• eliminate incidents - testing

• reduce impact

• sharing / transfer

• disclaimer: no party is responsible

• agreement: responsibility transferred

• compensation

• risk pooling: share losses

• risk hedging: bet on losses

• acceptance / retention

• self-insure

• accept losses partially from:

Blakley, B. and McDermott, E. and Geer, D., "Information security is information risk management,"

Proceedings of the 2001 workshop on New security paradigms, 2001

3.27. Risk treatment - controls

RT

• select risk treatment controls

• prevention

• firewall, authentication, locks

• detection

• IDS

• recovery

• backup, forensics


• management

• better data center for security information collection

• information sharing (more in Chapter 6)

• training / awareness

• employee training sessions

3.28. Risk treatment - action plan

RT

action plan = prioritize + implement actions / controls

• prioritize controls / actions

• cost-benefit analysis (more in Chapter 4)

• importance of risk (impact)

• effectiveness - difficult to quantify the benefit of unrealized losses (ROSI)

• get approval for the action plan - top mgmt support is essential

• implement the action plan

• develop a policy w/ security policy

• assign responsibility

• performance measures and reporting

• residual risks and acceptance

3.29. Risk monitoring and review

RM

• review and update processes and policies

• document each stage of the risk management process

• development and action plan (reasons and analysis)

• changes and efficiency

• legal basis

• reuse of information

3.30. Risk communication

RM

ISACA, "Risk-IT framework," 2009

3.31. Reading for next time


• H. Varian, System reliability and free riding, In Economics of Information Security, L. J. Camp, S. Lewis, eds. (Kluwer Academic Publishers, 2004), vol. 12 of Advances in Information Security, 2004

optional:

• H. Kunreuther and G. Heal, Interdependent security, Journal of Risk and Uncertainty, vol. 26 nr. 231, 2003

• Grossklags, J. and Christin, N. and Chuang, J., Secure or insure?: a game-theoretic analysis of information security games, WWW conference 2008

4. 4 Security investments

4.1. Defense is team-work

[Figure: Alice alone against Malice vs. Alice, Carol, Dave, ... defending together against Malice]

4.2. Threat model

[Figure: threat model — Malice attacks Alice directly; the damage reaches Bob and Carol indirectly]


4.3. Interdependent security model

• externality = cost or benefit to a party from an action she did not agree to

• (in economics) transaction between two parties affects others

• positive: network effect (many people buying the same product)

• negative: pollution

• security investment? + or -

• contagion - viruses spread over computers

Kunreuther and Heal, "Interdependent security," Journal of Risk and Uncertainty, 2003

4.4. Interdependent security model

• N players

• an attack hits one's own system directly or propagates from others

• airline game: check baggage and transfers

• p = prob of attack without investment

• q = prob attack comes from transfer

• Y = yield from operation

• L = loss in case of an incident

Kunreuther and Heal, "Interdependent security," Journal of Risk and Uncertainty, 2003

4.5. Interdependent security model

• 2-player game

[Payoff matrix: the 2-player investment game, Alice × Bob]

• same is true for N players...

Kunreuther and Heal, "Interdependent security," Journal of Risk and Uncertainty, 2003
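The payoff table itself is not reproduced above; the sketch below encodes the standard Kunreuther-Heal payoff structure (investing removes the direct risk p, but the contagion risk q from an unprotected partner remains) and checks when investing is a best response:

# Two-player interdependent security game (Kunreuther-Heal structure).
# Choices: invest in protection (True) or not (False).
def utility(i_invest, other_invests, Y, c, p, q, L):
    u = Y
    if i_invest:
        u -= c                      # pay the investment cost...
        if not other_invests:
            u -= q * L              # ...but contagion from the other remains
    else:
        u -= p * L                  # direct attack risk
        if not other_invests:
            u -= (1 - p) * q * L    # plus contagion, if not already hit
    return u

def invest_is_best_response(other_invests, **m):
    return utility(True, other_invests, **m) >= utility(False, other_invests, **m)

m = dict(Y=10.0, c=1.0, p=0.2, q=0.1, L=20.0)   # illustrative numbers
print(invest_is_best_response(True, **m),        # True: invest if the other does
      invest_is_best_response(False, **m))       # True: investing is dominant here

Note that the gain from investing is pL - c when the other player invests, but only pL(1 - q) - c when she does not: others' underinvestment erodes one's own incentive to invest, which is exactly the negative externality discussed above.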


4.6. Internalizing externalities

• insurance (Chapter 10)

• problem of moral hazard

• fix by deductibles or good monitoring

• liability

• attractive but high costs

• fines and subsidies

• artificially modify the payoff structure

• regulations and third-party inspections (Chapter 8)

• market coordination

• official certificates from associations

Kunreuther and Heal, "Interdependent security," Journal of Risk and Uncertainty, 2003

4.7. Free riding

• system reliability is a public good = depends on joint effort

• results in free-riding = people try to cheat

• three models (Hirshleifer, 1983)

• total effort: check system logs

• weakest link: software vulnerabilities

• best shot: best sysadmin to catch an intruder

• Hirshleifer's results:

• weakest link - various symmetric Nash equilibria possible

• total effort - efficient amount of the public good and the Nash equilibrium amount constant as the number of contributors increases

• best shot - only players with the lowest cost contribute

Jack Hirshleifer. "From weakest-link to best-shot: the voluntary provision of public goods." Public Choice, 41:371-86, 1983.

4.8. Game model

• 2 players - computer systems

• payoff: $u_i = P(x_1, \dots, x_n) \cdot v_i - c_i x_i$, where $x_i$ is the security effort of player $i$

• the reliability $P$ is differentiable, increasing in the efforts, and is concave

• Total effort: $P = F(x_1 + \dots + x_n)$

• Weakest link: $P = F(\min_i x_i)$

• Best shot: $P = F(\max_i x_i)$

• benefit/cost ratio: $v_i / c_i$

H. Varian, "System Reliability and Free Riding," working paper 2004
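The equilibrium patterns stated on the following slides can be reproduced numerically. A sketch with an assumed concave reliability function F(z) = 1 - exp(-z) and two players whose benefit/cost ratios differ:

import numpy as np

# u_i = F(joint effort) * v_i - c_i * x_i, with three aggregation rules.
F = lambda z: 1 - np.exp(-z)             # assumed concave, increasing F
rules = {"total effort": lambda x, y: x + y,
         "weakest link": min,
         "best shot": max}

grid = np.linspace(0, 5, 5001)

def best_response(rule, x_other, v, c):
    """Effort on the grid maximizing F(rule(x, x_other)) * v - c * x."""
    utils = [F(rules[rule](x, x_other)) * v - c * x for x in grid]
    return grid[int(np.argmax(utils))]

for rule in rules:
    xa = xb = 0.1
    for _ in range(30):                  # iterate best responses to a fixed point
        xa = best_response(rule, xb, v=2.0, c=0.5)   # Alice: ratio 4
        xb = best_response(rule, xa, v=1.0, c=0.8)   # Bob: ratio 1.25
    print(f"{rule}: Alice = {xa:.2f}, Bob = {xb:.2f}")
# total effort / best shot: Alice (highest ratio) does everything, Bob free rides;
# weakest link: both stay at the common low level they started from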

4.9. Results - Nash equilibria - total effort

total effort: System reliability is determined by the agent with the highest benefit-cost ratio. All other agents free ride on this agent.

• unique equilibrium, where 1 player does all

H. Varian, "System Reliability and Free Riding," working paper 2004

4.10. Results - Nash equilibria - weakest link

weakest link: System reliability is determined by the agent with the lowest benefit-cost ratio

• a range of equilibria exists; the one with the highest common effort level is Pareto-dominant


H. Varian, "System Reliability and Free Riding," working paper 2004

4.11. Results - Nash equilibria - best shot

best shot:

• NE where the agent with the highest benefit-cost ratio exerts all the effort.

• NE where the agent with the lowest benefit-cost ratio exerts all the effort.


H. Varian, "System Reliability and Free Riding," working paper 2004

4.12. Results - Increasing N

Intuitive result - increasing N:

Systems will become increasingly reliable as the number of agents increases in the total efforts case, but increasingly unreliable as the number of agents increases in the weakest link case.

4.13. Results - Liability

total effort: player with the least cost of effort should bear all the liability to avoid system failure

• standard result in the economic analysis of tort law

weakest link: strict liability is not adequate in general to achieve the socially optimal level of effort, and one must use a negligence (due care) rule to induce the optimal effort.

• also a standard result in liability law

H. Varian, "System Reliability and Free Riding," working paper 2004

4.14. Security investment is not so simple

• (security investment) is in fact two investments

• self-protection = prevent an incident from happening

• ex: software updates
