SystemsSec 2016W Lecture 21

From Soma-notes

Exam:

   - Similar format to the midterm
   - Open book: you can bring your laptop

Program Analysis

- Problem addressed
- Not clear on what they (program analysis tools) actually do
- Most techniques do not scale to real-world programs
- Reference to Coverity: a company product for analyzing programs for software flaws, particularly security flaws
- One of the people behind it is a co-author of this paper

- The twist: under-constrained?
- The fundamental problem of program analysis: false positives
- You can't do perfect analysis on programs at that scale; you need to make a number of assumptions
- If you have to assume things pessimistically, you get a report of everything that could be bad, and programmers won't look at that
- How do you give the programmer context when using an approach that increases false positives?
- At the program's inputs and outputs things are nicely set up, but if you jump into the middle of the program, who knows what it does
- ***There is no way to know whether all preconditions have been met, so how is this at all a good idea? (see the sketch below)
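A minimal sketch (a hypothetical example of mine, not code from the paper) of why the missing-precondition problem produces false positives: analyzed in isolation, the helper below looks like it can write out of bounds, but every real caller establishes the precondition, so the report is spurious in context.

 # Hypothetical helper: under-constrained analysis treats `src` as arbitrary,
 # so the write below looks like a possible out-of-bounds bug.
 def copy_field(dst, src):
     for i, byte in enumerate(src):
         dst[i] = byte          # flagged: i might exceed len(dst)

 # But real callers always establish the precondition len(src) <= len(dst),
 # so in the whole-program context the warning is a false positive.
 def handle_packet(packet):
     field = packet[:16]        # caller guarantees at most 16 bytes
     buf = bytearray(16)
     copy_field(buf, field)
     return buf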

- How did they save themselves? Liquid type inference
- Infer constraints on the data as it goes through the program: if the program behaves like this here, it must be in this state or that state (see the sketch below)
- One technique they use: jump into the middle of the program rather than trying to analyze the whole program; annotations
- SSL: false positives when looking at the R2 data
- Start in the middle of the program, assume everything is fine, and see how it goes, but with limited constraints in it to reduce the problem size
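A minimal sketch (a toy of my own, not the paper's liquid type inference) of the underlying idea: each observed use of a value implies a constraint that the value must satisfy at that point in the program.

 # Toy constraint inference over a list of observed uses of variables.
 # All names and the "uses" encoding here are hypothetical.
 def infer_constraints(uses):
     constraints = {}
     for kind, var, detail in uses:
         implied = constraints.setdefault(var, set())
         if kind == "index":          # var used as arr[var] with len(arr) == detail
             implied.add(f"0 <= {var} < {detail}")
         elif kind == "deref":        # var dereferenced, so it cannot be null
             implied.add(f"{var} != NULL")
         elif kind == "divisor":      # something is divided by var
             implied.add(f"{var} != 0")
     return constraints

 # Jumping into the middle of a function, the uses we see constrain the state:
 print(infer_constraints([("deref", "p", None), ("index", "i", 16)]))
 # {'p': {'p != NULL'}, 'i': {'0 <= i < 16'}}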

- They only allow this to run for an hour, which is relatively fast
- Why do this? Because if you start at the beginning, you can't reach a lot of parts of the program because of constraints; the solution is to just jump there
- Analyzing the program by symbolically executing it (symbolic execution); e.g. kind of like Java Eclipse, where you set a debugging breakpoint (a budgeted-exploration sketch is below)
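A minimal sketch (my own toy, not the actual tool) of the time-budget idea: explore states reachable from an arbitrary starting point and simply stop when the budget runs out, reporting whatever was covered in that time.

 import time
 from collections import deque

 # Toy budgeted exploration from an arbitrary entry point (all names hypothetical).
 def explore(entry_state, successors, budget_seconds=3600):
     deadline = time.monotonic() + budget_seconds
     seen = {entry_state}
     frontier = deque([entry_state])
     while frontier and time.monotonic() < deadline:
         state = frontier.popleft()
         for nxt in successors(state):
             if nxt not in seen:
                 seen.add(nxt)
                 frontier.append(nxt)
     return seen    # the states the analysis managed to reach within the budget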

- Under-constrained: the pro is that you get to some point that you may never actually get to
- With a patch: is there a differential crash? You can compare the patched and unpatched versions and see if the patch did something bad (see the sketch below)
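A minimal sketch (a hypothetical harness, not the paper's tooling) of the differential-crash idea: run the pre-patch and post-patch versions of a function on the same inputs and flag any input where only the patched version misbehaves.

 # Hypothetical differential check: did the patch introduce new crashes?
 def runs_cleanly(fn, x):
     try:
         fn(x)
         return True
     except Exception:
         return False

 def differential_crashes(old_fn, new_fn, inputs):
     regressions = []
     for x in inputs:
         if runs_cleanly(old_fn, x) and not runs_cleanly(new_fn, x):
             regressions.append(x)   # the patch did something bad on this input
     return regressions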

Why would you want to look at patches this way? Why are patches nasty?

- When you are doing a patch, something is broken and already deployed; it might be code that has been sitting around for years, potentially with no one who understands it
- Can you get it right? Kind of? Ish? The person doing the patch does not have the same level of understanding as the previous owner
- What is the impact of the bug from a security attack? Not likely much, as there are various security measures that could cover it; it is not worth the resources spent looking for this vs. updating security

- If this went to a program analysis conference, it would be laughed out of the room
- The timeline of the paper is funny: the papers they build on are from years ago, with a major gap in publication
- It could have been previously rejected by the program analysis sub-community and then dumped on the security community as a security paper


WebEval

   - A malicious extension detector

- One statistic that is horrifying: 10% of the whole, and only 95% are accurate

Major security problem: there is nothing on the user end that can counteract it

- Why are we downloading extensions? Added functionality


- The security restrictions of the web browser are too limiting; installing an extension means "I WISH TO BREAK THE SECURITY POLICY"

- What happens to the web if you give developers the permissions they want?

- Bad things get distributed

- Chrome extensions try to prevent this; the key thing to have is a permission model
- But developers ask for more permissions up front, and then scale it as they work on it

- The problem is set up so that people could be sending our data out at all times; so what are we doing to handle this issue?

- Keep a list of the behaviours of malicious extensions
- Binary classification: two sets, good or bad, and try to classify

- Humans are used when there is high entropy from the classification (see the triage sketch below)
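A minimal sketch (a toy model, not WebEval's actual pipeline) of that triage: a binary classifier scores each extension, and items whose score is uncertain (high entropy, i.e. close to 0.5) are queued for human review.

 import math

 def entropy(p):
     # Binary entropy of a probability; highest when p is near 0.5.
     if p <= 0.0 or p >= 1.0:
         return 0.0
     return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

 def triage(extensions, score, entropy_threshold=0.8):
     benign, malicious, human_review = [], [], []
     for ext in extensions:
         p_malicious = score(ext)                 # classifier's probability estimate
         if entropy(p_malicious) >= entropy_threshold:
             human_review.append(ext)             # classifier is unsure: escalate
         elif p_malicious >= 0.5:
             malicious.append(ext)
         else:
             benign.append(ext)
     return benign, malicious, human_review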


- An automated system can be gotten around: change the code until it gets past, to get around the restrictions and evade the classification rules that are implemented
- So the only way to find this is to use humans; this is a classical problem of binary classification