[Haizea] Experimenting with Real Workloads

Mehdi Sheikhalishahi mehdi.alishahi at gmail.com
Thu May 13 11:04:13 CDT 2010


I understand that, along with the workload, we should also prepare a site file
describing the specifics of the site. Since the end of haizea-swf2lwf's run
method performs a calculation of total_capacity, why not make the -n option
mandatory? Another solution would be to use the MaxNodes and MaxProcs headers
in the SWF file to determine site_num_nodes.
If we don't provide this information in some way, we will run into errors here:

total_capacity = site_num_nodes * (to_time - from_time).seconds

; Version: 2.2

; Computer: Intel iPSC/860
; Installation: NASA Ames Research Center
;
; Acknowledge: Bill Nitzberg
; Information: http://www.nas.nasa.gov/
;              http://www.cs.huji.ac.il/labs/parallel/workload/
;
; Conversion: Dror Feitelson (feit at cs.huji.ac.il) 1 Aug 2006
; MaxJobs: 42264
; MaxRecords: 42264
; Preemption: No
; UnixStartTime: 749458803
; TimeZone: -28800
; TimeZoneString: US/Pacific
; StartTime: Fri Oct 01 00:00:03 PDT 1993
; EndTime:   Fri Dec 31 23:03:45 PST 1993
; MaxNodes: 128
; MaxProcs: 128
; Note: There is no information on wait times - the given submit
;       times are actually start times
; Note: group 1 is normal users
;       group 2 is system personnel
; Note: there is no data about batch queues
;
; Note: This is a cleaned version of the log!
;	The filter used to produce it was
;	user=3 and application=1 and processors=1 (24,025 jobs removed)
;
; MaxQueues: 2
; Queue: 0 interactive
; Queue: 1 batch
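To illustrate the second option, here is a rough sketch (my own, not existing
Haizea code) of reading MaxNodes, falling back to MaxProcs, from the header
comments of an SWF file like the one above. The function name and the
fall-back order are my assumptions:

```python
# Hypothetical helper (not part of Haizea): derive site_num_nodes from the
# MaxNodes/MaxProcs header comments of an SWF file.
def swf_site_num_nodes(swf_lines):
    """Return MaxNodes (or, failing that, MaxProcs) from SWF header comments."""
    headers = {}
    for line in swf_lines:
        line = line.strip()
        if not line.startswith(";"):
            break  # header comments end where the job records begin
        if ":" in line:
            # "; MaxNodes: 128" -> key "MaxNodes", value "128"
            key, _, value = line.lstrip("; ").partition(":")
            headers[key.strip()] = value.strip()
    for key in ("MaxNodes", "MaxProcs"):
        if key in headers and headers[key].isdigit():
            return int(headers[key])
    return None  # neither header present: the -n option would be required

# Example with the header shown above:
print(swf_site_num_nodes(["; MaxNodes: 128", "; MaxProcs: 128", "0 0 -1"]))
# -> 128
```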


    if self.opt.site != None:
        site_elem = ET.parse(self.opt.site).getroot()
        site_num_nodes = int(site_elem.find("nodes").find("node-set").get("numnodes"))
        root.append(site_elem)
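As a standalone sketch of the provisioning I'm suggesting (my own code, not
the actual Haizea implementation; the function name and the n_option
parameter, standing in for the -n value, are assumptions), the snippet above
could fall back to the -n option when no site file is given:

```python
# Hypothetical sketch: resolve site_num_nodes from an XML site file when one
# is supplied, otherwise from the -n option, so total_capacity can always
# be computed.
import xml.etree.ElementTree as ET

def resolve_site_num_nodes(site_path=None, n_option=None):
    """Return the node count from a site file if present, else from -n."""
    if site_path is not None:
        site_elem = ET.parse(site_path).getroot()
        return int(site_elem.find("nodes").find("node-set").get("numnodes"))
    if n_option is not None:
        return int(n_option)
    raise ValueError("either a site file or the -n option is required")

# Usage when only -n was given on the command line:
site_num_nodes = resolve_site_num_nodes(n_option="128")  # -> 128
```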


On Thu, May 13, 2010 at 12:02 AM, Borja Sotomayor <borja at borjanet.com>wrote:

> Hi,
>
> > What steps we should take to experiment with Real Workload such
> as Parallel
> > Workloads Archive?
> > What facilities provided within Haizea?
>
> The 1.1 branch (
> https://phoenixforge.cs.uchicago.edu/repositories/browse/haizea/branches/1.1
> )
> includes a command called haizea-swf2lwf that converts SWF files (the
> format used in the Parallel Workloads Archive) to LWF files. Once
> you've got the LWF file, it's just a question of running experiments
> as described in the Haizea documentation.
>
> Unfortunately, the 1.1 branch is still experimental and still not well
> documented. Running "haizea-swf2lwf -h" will provide some information
> on the command's parameters, although what some of them do might not
> be immediately apparent (the best documentation I can offer at this
> point is the code itself; haizea-swf2lwf is implemented at the end of
>
> https://phoenixforge.cs.uchicago.edu/repositories/entry/haizea/branches/1.1/src/haizea/cli/commands.py
> )
>
> The 1.1 branch will eventually become the stable 1.2 version (with
> proper documentation, etc.), and I'm currently aiming for releasing a
> 1.2 beta sometime during the summer. However, I don't have a concrete
> timeline yet.
>
> Cheers!
> --
> Borja Sotomayor
> PhD Candidate in Computer Science, University of Chicago
> http://people.cs.uchicago.edu/~borja/
> _______________________________________________
> Haizea mailing list
> Haizea at mailman.cs.uchicago.edu
> https://mailman.cs.uchicago.edu/mailman/listinfo/haizea
>



-- 
Regards,
Mehdi

