IRC, freenode, #hurd, 2011-07-13

<braunr> does anyone know if posix (or mach) has requirements or a policy
  about the placement of allocations of virtual space ?
<braunr> a policy such as bottom-up ?
<braunr> or "find lowest vailable space" ?
<jkoenig> braunr, you mean for vm_allocate ? You may want to check mmap()
  but I can't remember ever coming across such a thing (except maybe
  wrt. alignment)
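
For reference, POSIX leaves the placement decision to the implementation:
the address argument to mmap() is only a hint unless MAP_FIXED is given.
A minimal sketch of the two caller-visible cases:

    /* POSIX placement: the kernel picks the address unless MAP_FIXED is
       used; a non-NULL addr without MAP_FIXED is only a hint.  */
    #include <sys/mman.h>
    #include <stdio.h>

    int
    main (void)
    {
      size_t len = 4096;

      /* NULL: the kernel chooses the placement.  */
      void *a = mmap (NULL, len, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

      /* Hint only: the kernel may place the mapping elsewhere.  */
      void *b = mmap ((void *) 0x40000000, len, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

      printf ("kernel-chosen: %p, hinted: %p\n", a, b);
      return 0;
    }
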
<braunr> i was wondering how e.g. libraries are linked near the stack
  (possibly at slightly random addresses)
<braunr> does the linker walk the address space entries top-down ?
<braunr> jkoenig: i didn't see anything either in the mach interface, but i
  may have missed something
<braunr> jkoenig: most systems i've been studying mark the vm regions for
  the heap and the stack
<braunr> but for mach, the stack is just allocated virtual memory at the
  top of the space
<braunr> so the "placement policy" is either completely outside the kernel,
  or relies on its interface
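
On the Mach side, vm_allocate() makes that split explicit through its
anywhere argument: when it is TRUE the kernel picks a free region (the
exact search policy lives in the kernel's vm_map code), when it is FALSE
the caller dictates the address. A small sketch, assuming a Hurd/GNU Mach
build environment:

    #include <mach.h>
    #include <stdio.h>

    int
    main (void)
    {
      kern_return_t kr;
      vm_address_t addr = 0;

      /* anywhere == TRUE: the kernel chooses the placement and returns
         the chosen address in ADDR.  */
      kr = vm_allocate (mach_task_self (), &addr, vm_page_size, TRUE);
      if (kr == KERN_SUCCESS)
        printf ("kernel placed the region at 0x%lx\n", (unsigned long) addr);

      /* anywhere == FALSE: the caller dictates the address; this fails
         (e.g. KERN_NO_SPACE) if the range is already in use.  */
      addr = 0x40000000;
      kr = vm_allocate (mach_task_self (), &addr, vm_page_size, FALSE);
      printf ("fixed placement: %s\n", kr == KERN_SUCCESS ? "ok" : "refused");

      return 0;
    }
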
<jkoenig> braunr, actually I'm surprised Mach would even dictate where the
  (one?) stack should be, I would have expected it to be the job of
  whatever creates a thread to make this kind of choice
<braunr> jkoenig: threads have their own stacks, under the responsibility
  of the user thread implementation
<braunr> but a program usually needs a stack even before it runs
<braunr> i had to set one to bootstrap modules in v0.1
<braunr> but i wonder if it's just for bootstrapping (and then propagated
  by fork()) or part of the interface
<braunr> but this doesn't matter much actually, the allocation mechanism i
  have in mind can actually support multiple policies
<jkoenig> I would guess the former (just for bootstrapping), since a new
  task has no thread, and a new thread has no state. (but I'm no expert)
<braunr> i think so
<braunr> i'll have a look at the exec server
<braunr> jkoenig: did the previous implementation of procfs show task maps
  ?
<jkoenig> braunr, I don't think so, I would probably have felt compelled to
  include them in the new one if it did :-)
<braunr> hmmm
<braunr> we definitely need that
<jkoenig> is there a compelling use case you think about in particular?
<braunr> yes
<braunr> i failed to understand how gnumach behaved wrt ipc right spaces

rework gnumach ipc spaces

<braunr> and when i did, i found out my work was impossible to integrate
<jkoenig> "ipc right spaces" ?
<braunr> each task have an ipc space, which contains righs
<braunr> rights
<braunr> the ipc translation layer converts space/name and space/port
  tuples to rights
<braunr> i wanted to replace the splay tree with a radix tree but didn't
  get how the ipc table made the splay tree almost unused
<braunr> i don't want to make this kind of mistake again, so i'd like a
  clear and detailed view of the vm spaces
<braunr> (it's only compelling for myself, all right)
<braunr> but
<braunr> we have vminfo
<braunr> rbraun@nordrassil:~$ vminfo $$ | wc -l
<braunr> 1046
<braunr> oh my
<braunr> in comparison, a firefox instance has less than 500 on linux
<jkoenig> you mean there's some kind of port name table (or functional
  equivalent) which actually resides in the task's memory? (and that's what
  shows up at the beginning of the address space with prot=0?)
<braunr> jkoenig: sorry for being confusing, it's not that at all
<jkoenig> (btw feel free to tell me to just go read the source or whatever)
<braunr> jkoenig: don't worry
<jkoenig> braunr, no problem
<braunr> jkoenig: i just compared a previous attempt to improve gnumach
  which failed because i didn't have enough insight into the inner workings
  of the kernel
<braunr> jkoenig: i really want to miss as little as possible on the vm
  part, so having detailed information about what actually happens on
  running hurd systems is something i need

IRC, freenode, #hurd, 2011-07-24

<braunr> oh btw, i noticed there are many mappings below the program text
<braunr> most notably, the stack
<braunr> except for special applications like wine, could this break
  anything ?
<braunr> i also wonder how libraries are mapped, because there is nothing
  to perform top-down allocations
<braunr> which means if the region below the program text is exhausted,
  libraries could be mapped right after the heap
<youpi> it shouldn't break anything except things like wine & libgc, yes
<braunr> which could make malloc() fail :/