This is a composite of a photogrammetric rendering of this tattoo of my grandmother's carpet.
My father dedicated his PhD in computer science in the 1960s to his mother, for teaching him how to code by watching her knit.

1  

Presentation at 21st Century Photography: Art, Philosophy, Technique
5-6 June 2015
University of the Arts London
https://photoconference2015.wordpress.com/

2  

I want to describe the emergence of forms of surface between digital photographs that has been mediated by computer vision.
I want to set out an argument that these surfaces have the potential to reshape the structure of thought.
And I want to propose that MESH is both a technological artefact of the structure between photographs and a resonant conceptual model.

3  

My PhD creative practice has been framed by a question on the nature and constitution of surface in digital photographs.
Specifically: Given the dematerialisation of the photographic image, to what extent can a photograph be regarded as having a surface?
The short answer is yes, there are many forms in which surface is present in digital photographs.
• Photographs have a fundamental relationship with surface.
• A photograph may be conceived as an impression of surface, of light reflected off surface;
• a photograph is itself a surface, a two-dimensional plane, be it a physical print or a mathematical concept;
• and the screen may be conceived as a form of surface.
I have also come to the position that 'dematerialisation' is NOT a given.

4  

Surface is simultaneously a material, abstract and psychological entity.
Surface is the place of contact and separation, transfer and shedding, the boundary of expansion and contraction.
It is a powerful guide by which to interrogate an entity, an environment within which we are immersed.
In this context, I propose to employ it as an ontological metaphor with which to visualize a shift in shape and boundary.
Surface and depth are co-creative and indivisible.
And I am delighted that surface appears to be one of the emergent themes of this conference.

5  

Whilst telling this story, I will be observing two intertwined, contradictory trajectories:
• that technologies create paradigms;
• and that technologies are created to meet desires.

6  

I am drawing on Flusser's model of how technologies shape thought.
In summary, he proposed that:
• the written word facilitates linear thought;
• photographs are surfaces, two dimensional, and facilitate scanning modes of thought (Flusser, 2000);
• whereas computer technologies create interconnected networks that facilitate more complex thinking.
I wish to consider this model in the context of an algorithmic environment for digital photographs.

7  

As Batchen has observed, whenever somebody invents a new form of imaging technology, the first thing they do is make an image of their child.
Perhaps, more precisely, imaging technologies are created BECAUSE we want to capture images of our children. The impulse is emotional and personal.
Fox Talbot's disappointing drawings of his honeymoon could also be classed with these, as could Morse's invention of Morse code and the telegraph system following the death of his wife.
One aspect of the contested place of photography can be characterised as a tussle between those who view the medium as inherently tied to certain technologies and those who define photography as a specific set of practices, as photographic impulses or desire (Batchen 1997 p.212, Maynard 2010 p.29, Warner Marien 2012 p.6).

8  

Whilst images have always been dialogic, in dialogue with and connected to other images, this has been amplified and made more tangible by the algorithmic turn.
As a result, it is possible to visualise the relationships between images as having geometry, form and, therefore, surface.
Specifically, I want to consider this in two forms: the first is photogrammetry; the second is reverse image search engines, such as Google 'Search by Image'.
Both these applications of computer vision draw out relationships between images that have both surface and depth.

9  

Art+Com undertook some significant visualization work in 1995 that gave shape to moving image sequences. Rather than positioning the screen as an invisible immobile portal, The Invisible Shape of Things Past described the movement of the camera image through space and time. The resulting forms were 3D printed.
These works demonstrate a relationship between space, surface, time and movement.
Joachim Sauter and Dirk Lüsebrink, The Invisible Shape of Things Past.
Serial vs parallel (Galloway).
Intervals between image planes can measure time but can also measure space.
https://vimeo.com/95422036 (34")

10  

Art+Com undertook some significant visualization work in 1995 that gave shape to moving image sequences. Rather than positioning the screen as an invisible immobile portal, The Invisible Shape of Things Past described the movement of the camera image through space and time. The resulting forms were 3D printed.
These works demonstrate a relationship between space, surface, time and movement.

11  

The CCD sensor records a grid of measurements of light intensities: sample points.
It is this data that facilitates what is known as computer vision, but a computer does not 'see' in the cognitive sense that a human subject perceives through vision.
Rather, computer vision is the function of a set of algorithms that automate a comparative search for a match between patterns of data (Turek, 2011).
Computer vision, also known as machine vision, has found applications in a range of settings, such as factory automation and automated navigation.
I want to limit consideration to some of the ways in which computer vision is impacting on photography.
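To make that 'comparative search for a match' concrete, here is a minimal sketch of my own (not part of the original talk, and not any particular vendor's algorithm): a small template is slid across a grid of intensity samples and scored numerically at each position.

```python
import numpy as np

def best_match(image: np.ndarray, template: np.ndarray):
    """Return the (row, col) where the template correlates most strongly with the image."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-9)
            score = float((p * t).mean())          # normalised cross-correlation
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Example: plant a bright 3x3 patch in a toy 8x8 "sensor grid" and search for it.
rng = np.random.default_rng(0)
img = rng.random((8, 8))
img[5:8, 2:5] += 2.0
pos, score = best_match(img, img[5:8, 2:5].copy())
print(pos, round(score, 3))                        # -> (5, 2) with a high correlation score
```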

12  

Stitched panoramas were my first encounter with algorithmic image manipulation that employed a form of computer vision.

13  

Working with overlapping digital photographs of interiors, the software identified the matching overlapping elements within the individual captures and 'stitched' them together to create panoramas.
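For readers who want to see the stitching step in code, the following is a hedged sketch using OpenCV's high-level Stitcher class; the software used for the work described above is not specified in the talk, and the file names here are hypothetical.

```python
import cv2

# Hypothetical file names for a set of overlapping interior captures.
paths = ["room_left.jpg", "room_centre.jpg", "room_right.jpg"]
images = [cv2.imread(p) for p in paths]

# The stitcher detects matching features in the overlaps, estimates how each capture
# must be warped, and blends them into a single panorama.
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("Stitching failed, status code:", status)
```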

14  

Both these images are built from the same data, the same set of captures.
Whereas the stitched panorama flattens and unwraps the scene, photogrammetry calculates the relative shapes of the content.

15  

Not  the  full  set    

16  

Images  aligned  and  distorted  via  computer  vision  

17  

Interested in the imperfections, the wonky bits, the messy density of the meshes, and the gaps left where the program perceives blank spaces.
Will be exhibited via a virtual reality headset: you will be inside the virtual room whilst inside the gallery.
Photogrammetry is defined by Kyle as:
…methods of image measurement and interpretation in order to derive the shape and location of an object from one or more photographs of that object. In principle, photogrammetric methods can be applied in any situation where the object to be measured can be photographically recorded. The primary purpose of a photogrammetric measurement is the three-dimensional reconstruction of an object in digital form […] or graphical form (images, drawings, maps). (Kyle et al., 2013)
FYI, this object will be exhibited in the Tinning Street Gallery via a virtual reality headset, probably Google Cardboard.

18  

Imperfections: interested in the wonky bits, the messy density of the meshes.
Will be exhibited via a virtual reality headset: you will be inside the virtual room whilst inside the gallery.

19  

Photogrammetry literally demonstrates the emergence of a surface formed by the relationship between images.
What is most striking is the ability to rotate the shape and examine the void within the surface of the 3D composite image.
This effect is not dependent on one particular technique or technology. Examples include Microsoft Photosynth (2007), Autodesk 123D Catch (2009), and Agisoft PhotoScan (2006).
Stereo-photogrammetry utilises the shifts in perspective generated by parallax, the relative position of the foreground and background, to build an image surface formed around a 3D shape.
The algorithm calculates the shifts in relative position between foreground and background in order to calculate the three-dimensional relationship between elements.
It is the relationship between the images that generates the shape.
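As an illustration of the parallax calculation described above, the sketch below uses OpenCV block matching on a stereo pair to recover a disparity map, from which relative depth follows. This is my assumption of a typical pipeline rather than the author's tool chain; the file names and camera constants are placeholders.

```python
import cv2
import numpy as np

# Hypothetical file names for two overlapping captures taken a small step apart.
left = cv2.imread("capture_left.jpg", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("capture_right.jpg", cv2.IMREAD_GRAYSCALE)

# Block matching finds, for each patch in the left image, its best match in the right:
# the horizontal shift between the two is the disparity produced by parallax.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

# With known camera geometry, depth is inversely proportional to disparity:
# depth = focal_length_px * baseline_m / disparity. Both constants here are placeholders.
focal_length_px, baseline_m = 1200.0, 0.1
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_length_px * baseline_m / disparity[valid]
print("closest matched surface is roughly", float(depth[valid].min()), "metres away")
```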

20  

21  

e.g. depth data generated from Google Street View images:
http://www.patriciogonzalezvivo.com/2014/pointcloudcity/

22  

  Reminiscent  of  Ariel  Caine’s  point  clouds  shown  yesterday  

23  

Point  cloud  then  connected  into  a  structure  via  polygon  mesh  
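A toy sketch of that step, under simplifying assumptions of my own (a 2.5D cloud triangulated over its ground plane rather than a full 3D reconstruction): every point becomes a vertex, every triangle a face of the resulting polygon mesh.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
xy = rng.random((200, 2))                        # scattered sample points on the ground plane
z = np.sin(3 * xy[:, 0]) * np.cos(3 * xy[:, 1])  # a toy height field standing in for measured depth
points = np.column_stack([xy, z])                # the "point cloud": one x, y, z row per point

tri = Delaunay(xy)        # connect neighbouring points into triangles over the x-y plane
faces = tri.simplices     # (n_triangles, 3) vertex indices into `points`

# `points` (vertices) and `faces` (triangles) together describe the polygon mesh:
# a hollow shell of connected shards rather than a solid object.
print(len(points), "vertices,", len(faces), "triangular faces")
```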

24  

25  

Bennett 2015, work in progress.
Love the spider web effect.

26  

27  

Hyper  real    

28  

http://www.qfxdigital.co.uk/#!duologue-memex/c1sem

29  

30  

http://www.dailymail.co.uk/sciencetech/article-2761272/This-NOT-real-woman-Meet-Beryl-creepy-lifelike-3D-virtual-model-using-scans-elderly-lady.html

31  

32  

33  

Photogrammetry is one of a number of techniques employed to create 3D digital scans. For example, Smithsonian X 3D, the Smithsonian Institution's 3D digitisation program, uses a combination of techniques that includes "laser scanning, structured light scanning, and DSLR photogrammetry" [Gates 2015]. These three techniques are clearly seen in the documentation of the process of creating a bust of President Obama [White House 2014].

34  

Whilst photogrammetry is a rapidly emerging area of mainstream digital imaging (a number of phone apps are available), there is a pre-photography historical precedent in the work of the 17th century sculptor Bernini (Cotter, 2008).

35  

36  

Bust of Charles I, attributed to Jan Blommendael (Royal Collection, n.d.), with triple portrait of Charles I by Anthony van Dyck c1635 (Royal Collection, c1635), exhibited together in 'In Fine Style' at The Queen's Gallery, Buckingham Palace (Royal Collection, 2013). Photo: (Bates, 2013)

   

37  

This potential within photography was acknowledged at the beginning of the medium/technology.
In Arago's 1839 announcement to the Academy of Sciences in Paris on 'a method of capturing images with a camera', Arago noted the implications of photography for the efficient collection of topographical data (Barger and White, 2000, p. 25).
Arago viewed the daguerreotype technique as a process for scientific analysis, a means of mapping and measuring (Tresch, 2007).
This aligns with the 19th century moment when illustrations moved away from romantic depictions of ruins in a landscape to a mode of inventory and categorizing finds (Galperina, 2014).
"Equip the Egyptian Institute with two or three [examples] of Daguerre's apparatus, and before long on several of the large tablets of the celebrated work, which had its inception in the expedition to Egypt, innumerable hieroglyphics as they are in reality will replace those which now are invented or designed by approximation. These

38  

Well before Muybridge in 1878 was using photography to deconstruct and recreate time through serial sets of photographs, Willème was using photography to deconstruct and recreate spatial relationships with 2D photographs in Paris.

39  

Translation from photo to sculpture via a pantograph.
http://en.wikipedia.org/wiki/Pantograph
Note this is a USA patent.

40  

François  Willème   Unfinished  Photosculpture,  1859    

41  

Barry X Ball has been using photogrammetry [?] to reproduce (make 'after') sculptures in his Masterpieces series, including 'Hermaphroditus asleep', which includes a mattress carved by Bernini (Louvre, n.d.).

42  

43  

44  

45  

Clement Valla has also used photogrammetry to photograph museum objects for his 2014 exhibition Surface Survey, but his project has been an examination of the structures and surfaces created by the technology itself.
In an interview published in Animal New York, Valla explained that he was invited to work on a project with the Metropolitan Museum Media Lab, where he discovered their 3D models:
I began taking them apart to see how they had been constructed, deconstructing them into texture maps produced by the software. They immediately reminded me of archaeological fragments, bits and shards of artifacts to be reassembled into a complete whole… and of archaeological illustrations from the late 19th century, at the moment when the illustrations moved away from romantic depictions of ruins in a landscape to a mode of inventory and categorizing finds. (Galperina, 2014)
Valla draws an analogy between the pieces of texture created from deconstructing the 3D photogrammetry files and the fragments found on an archaeology dig. He then compares this with the possibilities of archaeology in the digital archive. He explored this by exhibiting the texture maps of found 3D scans alongside unwrapped textures of the museum object scans (Galperina, 2014; Pangburn, 2014; Transfer Gallery,

46  

Project Mosul has actually used screenshots from the footage of the objects' destruction in the photogrammetry reconstruction.
So, to summarise my point, photogrammetry is one example of how new surfaces are constructed in the relationship between images.
In this case, the surface is a composite, a shell comprised of shards that have been compiled by algorithmic computer vision, constructing the shape based on measurement of the relationship between image elements and extrapolation based on a projection of perspective lens representation.
The surface is shaped from the relationships between photographs.
The digital artefact is a hollow shell that can be deconstructed into its constituent parts.

47  

In his 2007 TED Talk, Blaise Agüera y Arcas demonstrated how Photosynth can execute a form of photogrammetry to create a 3D model of Notre Dame Cathedral that was constructed from a collection of images scraped from Flickr (Agüera y Arcas, 2007).
He pointed to a potential future where all images can be connected spatially via their visual content.
This demonstration of Photosynth is a concrete example of the connection between computer vision, photogrammetry and the use of search engines to group and organise knowledge.

48  

Whilst photographs have always been arranged and grouped, shared, touched and handled, the affordances of the digital environment have altered the forms of the encounter and the relationship between images.
Most web-based photographs are hyperlinked to further content. The viewer is urged to click through to the next image encounter. Photographic images are experienced as a cascade of linked and interconnected image planes. The encounter is a click or stroke, flow or swipe.
This rapid growth in the circulation of photographic images can be compared to the rapid penetration of photography in the 19th century in the decades following the announcement of the daguerreotype in 1839. Within months of Daguerre announcing his technique in Paris, Samuel Morse had obtained a translation of Daguerre's manual and established a photographic studio in the USA ("Divine perfection," 1999).

49  

Natale draws a correlation between the introduction of photography to the USA in the mid 19th century and the growth of communication media (telegraphy, railroads and the postal system) that is familiar to us in terms of the contractions of time, space and knowledge wrought by digital communication media in the last two decades (Natale, 2012). The figure of Samuel Morse is just one example of the entanglement of communication and imaging technologies.
One implication of this current phase is that digital photographs are no longer contained as stand-alone two dimensional prints to be contemplated and considered. Whilst photographs have always been arranged and grouped, shared, touched and handled, the affordances of the digital environment have altered the forms of the encounter and the relationship between images. Most web-based photographs are hyperlinked to further content. The viewer is urged to click through to the next image encounter. Photographic images are experienced as a cascade of linked and interconnected image planes. The encounter is a click or stroke, flow or swipe.
[Boundaries of photography: dissolved and ubiquitous]

50  

In 2011, Google introduced the 'Search by Image' feature (Chris Crum, 2011).
This facility differed from the established version of Google Images, where searches were based on text search terms. Search by Image is based on the image file itself.
Google 'Search by Image' is not the first reverse image search engine: TinEye, launched in 2008 (TinEye, 2008), claims to be "the first image search engine on the web to use image identification technology rather than keywords, metadata or watermarks" (TinEye, n.d.).
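As a rough illustration of how a search can be driven by the image file itself rather than keywords, here is a minimal 'average hash' fingerprint. This is a generic stand-in of my own, not a description of Google's or TinEye's actual algorithms, and the file names are hypothetical.

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to size x size greyscale, then set one bit per pixel brighter than the mean."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count differing bits: a small distance suggests visually similar files."""
    return bin(a ^ b).count("1")

# Hypothetical usage: a query image ranked against an indexed collection.
# index = {p: average_hash(p) for p in ["a.jpg", "b.jpg", "c.jpg"]}
# q = average_hash("query.jpg")
# print(sorted(index, key=lambda p: hamming(index[p], q)))
```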

51  

Samuel Bland's Googlology series is a visual conceptual strategy that reverse engineers and reveals something of the workings of the Google 'Search by Image' algorithm.
Using his original photographs, Bland combined the first twelve 'visually similar' results from a Google 'Search by Image' search (Schiller, 2013).
The search results were layered to create a composite average that reveals the workings of the computer vision algorithm.
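The layering step is simple enough to sketch; the following averages a set of result images pixel by pixel into a single composite. The file names and frame size are hypothetical, and this is my illustration rather than Bland's actual process.

```python
import numpy as np
from PIL import Image

# Hypothetical file names for the twelve 'visually similar' results.
result_paths = [f"visually_similar_{i:02d}.jpg" for i in range(1, 13)]

# Bring every result to a common frame, then take the per-pixel mean of the stack.
frames = [np.asarray(Image.open(p).convert("RGB").resize((600, 400)), dtype=np.float64)
          for p in result_paths]
composite = np.mean(frames, axis=0).astype(np.uint8)

Image.fromarray(composite).save("composite_average.jpg")
```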

52  

Bland's composites clearly illustrate that, at this time, the function of computer vision within this algorithm does not comprehend content or representation.
Whereas traditional organizational taxonomies might arrange images according to their content, by what they represent (images of birds in one group; images of cars in another group), the Google 'Search by Image' results connect and group images according to the formal arrangement of shape and line, contrast and colour.
Content blind.

53  

This is somewhat like rearranging all the books in a library according to their size rather than their subject matter.
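To make the 'content blind' grouping concrete, here is a toy sketch of my own that clusters images purely by a formal feature, their colour histograms, with no notion of what the pictures depict. The file names are hypothetical.

```python
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def colour_histogram(path: str, bins: int = 8) -> np.ndarray:
    """A normalised joint R,G,B histogram: subject matter plays no part in this feature."""
    rgb = np.asarray(Image.open(path).convert("RGB").resize((128, 128)))
    hist, _ = np.histogramdd(rgb.reshape(-1, 3), bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return (hist / hist.sum()).ravel()

# Hypothetical file names: the clusters reflect how the images look, not what they show.
paths = ["img_001.jpg", "img_002.jpg", "img_003.jpg", "img_004.jpg"]
features = np.stack([colour_histogram(p) for p in paths])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
for path, label in zip(paths, labels):
    print(label, path)
```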

54  

Bland's work is a concrete example of the challenge to representation raised by Rubinstein and Fisher (Rubinstein and Fisher, 2013, p. 9).
In this current 'reverse image search engine' algorithmic environment, digital photographs may no longer be sorted, organized, associated and linked according to their representational content.

55  

In an interview recorded in 1988 at the European Media Art Festival in Osnabrück, Flusser described the geometry of an emerging paradigm as 'structural' thinking and predicted that the implications would be as significant as the introduction of writing.

56  

Attempts to visualize the geometry of the internet certainly reinforce this idea of computers as a medium generating complex networked structures (Meeks, 2011b).

57  

I am proposing that the relationships being drawn between images by search engines might also be visualised as a geometric structure, rather than as Bland's flattened composites.

58  

If I were to attempt to summarize Flusser's model, we could say that the technology of writing is uni-dimensional and enforces/facilitates linear thinking; photographs are two dimensional and facilitate 'scanning', planar forms of thinking; whereas computer technologies support a 'structural', three dimensional, complex form of thought.
I am proposing that the relationships being drawn between images by search engines might also be visualised as a geometric structure.

59  

Flusser is by no means the first and only writer to consider the implications of technology on culture, knowledge and thought.
For example, Ong's Orality and Literacy considered the shift in cultural consciousness between oral culture and the impact of writing as a technology (Ong, 1982);
Andy Clark, author of Natural-Born Cyborgs, discussed extended mind theory and the ways in which we use technologies to extend and facilitate thought (Clark, 2003).
Observations on the relationship between technology and thought are reiterated and extended by Rowlands with his exploration of thought as embodied, embedded, enacted, and extended (Rowlands, 2010).
The notion that technologies have profound impacts on consciousness is supported by a number of studies.

60  

We know that technologies are not value neutral. Take for example the bias embedded in photography towards caucasian skin (Roth, 2009; "Teaching The Camera To See My Skin," n.d.).
Ted Striphas employs the term "algorithmic culture" for "the ways in which computers, running complex mathematical formulae, engage in what's often considered to be the traditional work of culture: the sorting, classifying, and hierarchizing of people, places, objects, and ideas" (Granieri, 2014).

61  

Given the emergence of sorting algorithms for images that do not rely on representational content, we find ourselves at a moment where the taxonomy of image-mediated knowledge/culture has been shifted in a profound way: from representational content to formal visual elements.
If we extend Flusser's contention that technologies impose shape and structure on thought/knowledge to consider the implications of computer vision mediated 'search by image' engines, how might we begin to conceive of how the organizing of images via computer vision will structure, shape and facilitate ways of knowing and perceiving?

62  

This moment I have described in the relationship between digital photographs, computer vision and reverse image search engines may be fleeting, if not already past.
It may be no more than a snapshot in a rapid journey. But snapshots can help us to understand where we have been and where we are going, a surface ground on which to briefly rest and reflect.

63  

The obvious and immediate exception to the content blindness of computer vision is the prevalence of facial recognition algorithms. Facebook's DeepFace program claims an accuracy comparable to human cognition.
This may be a passing phase given the emergence of deep learning AI capable of associating language with image content, but it does still give us a moment of insight into the implications of the shift towards a semantic web. Will we notice the paradigm shift? Given the connection with emotional desires met by smart phones and social media, we may not notice as we pass through the wormhole.
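As a hedged aside of my own, the sketch below shows off-the-shelf face detection with OpenCV's bundled Haar cascade; it is a much simpler cousin of recognition systems such as DeepFace, which it does not reproduce, and the input file name is hypothetical.

```python
import cv2

# OpenCV ships a pre-trained frontal-face Haar cascade alongside the library.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("crowd.jpg")                       # hypothetical input photograph
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                            # one bounding box per detected face
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("crowd_faces.jpg", image)
print(len(faces), "faces detected")
```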

64  

Still blind to a certain extent.
There is a plethora of art projects that attempt to reverse engineer, reveal and subvert this feature of big brother surveillance.
Among my favourite examples are Zach Blas' Face Cages (Andrew Lasane, 2014; Blas, 2013) and Onformative Studio's collaboration with Christian Loclair, Google Faces (onformative studio, 2013).
These projects point to an awareness that as we look at computers, computers are looking at us, the primary concern being that computer vision has significant implications for privacy.
In the case of Blas' Face Cages, the shape of the biometric algorithm

65  

There is some evidence that the operation of computer vision may be gaining a form of cognition.
Not that we can really know if an AI has 'cognition' in an organic sense, but there are increasing examples of automating recognition.
In 2012, Google Research set an artificial intelligence deep learning program the task of looking at YouTube videos in order to learn to recognize content.
Of course, the first thing that it learnt to recognize was cats (Clark, 2012).

66  

In 2014 Microsoft Research reported on the development of AutoCaption, an app to prompt users to caption their images by suggesting descriptions of the content (Ramnath et al., 2014).
Similar to theories of human perception, AI deep learning protocols develop and refine from exposure to content (Wolchover, 2014).

67  

More than matching words with images, recent work at UC Berkeley has been mapping brain activity against known mind's eye visuals.
In 2011, a group of researchers demonstrated some early results in work to literally capture images from the mind's eye using MRI mapping (Anwar, 2011; Nishimoto et al., 2011). In her Moonshots presentation and TED Talk, Mary Lou Jepsen discussed the achievability of this technology to deliver usable results (Solve for X, 2012; Jepsen, 2013). She also introduced research that suggests it may be possible to get the receptor neurons in the retina to run in reverse and literally read images from the mind's eye from the eye itself. This proposition has interesting resonance with the historical conception of vision as an emission rather than an intromission. Curiously, this debate may also have some connection with phenomenological conceptions of vision. Sadly, I do not have the space to investigate this in depth at this point, other than to mention that a phenomenological analysis of the notion that 'the computer looks back' may be rich, and to note that Wiesing (2010) has already considered the presence of phenomenological thinking in Flusser's work.
Tesla's 1933 proposition for a 'thought camera' may not be so kooky after all.

68  

Tesla's 1933 proposition for a 'thought camera' [reference] may not be so kooky after all.
1933 Deseret News.
Light field AR/VR.
http://placefacecyberspace.net/2015/03/05/thought-camera/

69  

To summarise:
Technologies create paradigms.
Computer vision is mediating the emergence of forms of surface structure between digital photographs.

70  

I am not attempting to predict precisely what the implications will be, but noting that computer vision mediated relationships between images have the potential to reshape the structure of thought.
I propose that MESH is a fertile metaphor with which to consider the shape of this emerging algorithmic image environment.
It describes the way in which surfaces are built between images, in both photogrammetry and reverse image search engines.
It also describes the embeddedness, the enmeshment, of these structures in our culture.
It also points to the complexity of the structures and surfaces generated by an algorithmic photographic environment: the surface is an approximation, a suggestion, that becomes porous as we approach.

71  

Natale, S., 2012. Photography and Communication Media in the Nineteenth Century. Hist. Photogr. 36.
National Gallery, n.d. Philippe de Champaigne and studio | Triple Portrait of Cardinal de Richelieu | NG798 | The National Gallery, London [WWW Document]. NationalGallery.org.uk. URL http://www.nationalgallery.org.uk/paintings/philippe-de-champaigne-and-studio-triple-portrait-of-cardinal-de-richelieu (accessed 1.2.15).
Nishimoto, S., Vu, A.T., Naselaris, T., Benjamini, Y., Yu, B., Gallant, J.L., 2011. Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies. Curr. Biol. 1641. doi:10.1016/j.cub.2011.08.031
onformative studio, 2013. Google Faces.
Ong, W.J., 1982. Orality and literacy: the technologizing of the word, New accents. London; New York: Methuen, 1982.
Pangburn, D., 2014. Become A Digital Archaeologist With Clement Valla's "Surface Survey" Exhibition [WWW Document]. Creat. Proj. URL https://thecreatorsproject.vice.com/blog/become-a-digital-archaeologist-with-clement-vallas-surface-survey-exhibition (accessed 12.28.14).
Payne, M., n.d. Mimobase [WWW Document]. Mimobase.com. URL http://mimobase.com/ (accessed 1.4.15).
Photoshop CS3: Final release details [WWW Document], 2007. Digit. Photogr. Rev. URL http://www.dpreview.com/news/2007/3/27/pscs3 (accessed 5.31.14).
Prosthetic Knowledge, 2014. 3D Printed ArcheAge [WWW Document]. Prosthet. Knowl. URL http://prostheticknowledge.tumblr.com/post/98012948036/3d-printed-archeage-curious-little-project-by (accessed 3.16.15).
Ramnath, K., Baker, S., Vanderwende, L., El-Saban, M., Sinha, S., 2014. AutoCaption: Automatic Caption Generation for Personal Photos. Microsoft Research.
Roth, L., 2009. Looking at Shirley, the Ultimate Norm: Colour Balance, Image Technologies, and Cognitive Equity. Can. J. Commun. 34, 111–136.
Rowlands, M., 2010. The new science of the mind: from extended mind to embodied phenomenology. Cambridge, Mass.: MIT Press, c2010.
Royal Collection, 2013. In Fine Style: The Art of Tudor and Stuart Fashion [WWW Document]. R. Collect. URL http://www.royalcollection.org.uk/exhibitions/in-fine-style-the-art-of-tudor-and-stuart-fashion-0 (accessed 3.16.15).
Royal Collection, c1635. Sir Anthony van Dyck (1599-1641) - Charles I (1600-1649) [WWW Document]. RoyalCollection.org.uk. URL http://www.royalcollection.org.uk/collection/404420/charles-i-1600-1649 (accessed 1.2.15).
Royal Collection, n.d. Attributed to Jan Blommendael (c.1650-1699) - Charles I (1600-1649) [WWW Document]. RoyalCollection.org.uk. URL http://www.royalcollection.org.uk/collection/35856/charles-i-1600-1649 (accessed 1.2.15).
Rubinstein, D., Fisher, A., 2013. Introduction: On the Verge of Photography, in: On the Verge of Photography: Imaging Beyond Representation. ARTicle Press, Birmingham.
Schiller, J., 2013. Google Is Alive, It Has Eyes, and This Is What It Sees | Raw File [WWW Document]. WIRED. URL http://www.wired.com/2013/05/sam-bland-google-goggles/ (accessed 5.31.14).

72