
[pci] The updated wbw_wbr_fifos.v



Hello

I believe I have had the same problem as Luigi with corrupt data. My
WISHBONE bus, however, operates at 40 MHz. The corrupted data is not
random but seems to be the address of the last transaction. This happens
rarely and has only been observed in simulation. The revision of
wbw_wbr_fifos.v I use is:
// Revision 1.9  2002/10/18 03:36:37  tadejm
// Changed wrong signal name scanb_sen into scanb_en.

Now I want to try the latest revision, but how can I easily use it
without having to change all the RTL files (I have made quite a lot of
changes to the core)? The problem, as far as I can see, is the new
naming convention. Will the new revision work with the old files if I
add "pci_" to every name where it is missing, so that the names match
the new file?
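
One way to avoid touching either file might be thin wrapper modules
that map one naming convention onto the other. A minimal sketch for the
RAM (the ports below are only an illustrative subset - the real
WB_TPRAM has a larger port list that would need to be forwarded 1:1,
and whether the old sub-modules are still functionally compatible with
the new FIFO logic is a separate question):

module pci_wb_tpram (clk_a, addr_a, di_a, we_a, clk_b, addr_b, do_b) ;

    parameter addr_width = 8 ;
    parameter data_width = 40 ;

    input                   clk_a ;
    input  [addr_width-1:0] addr_a ;
    input  [data_width-1:0] di_a ;
    input                   we_a ;
    input                   clk_b ;
    input  [addr_width-1:0] addr_b ;
    output [data_width-1:0] do_b ;

    // old module under its old name, connections forwarded unchanged
    WB_TPRAM #(addr_width, data_width) i_wb_tpram
    (
        .clk_a  (clk_a),
        .addr_a (addr_a),
        .di_a   (di_a),
        .we_a   (we_a),
        .clk_b  (clk_b),
        .addr_b (addr_b),
        .do_b   (do_b)
    ) ;

endmodule

The same pattern would cover pci_wbw_fifo_control and
pci_wbr_fifo_control; synchronizer_flop does not exist in my old tree,
so I would have to take it from the new one anyway.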

This is the diff I ran on the two files:

Kind regards, Daniel

bash-2.05a$ diff pci_wbw_wbr_fifos.v ../../../../tarfiles/pci/rtl/verilog/wbw_wbr_fifos.v
44,50c44
< // $Log: pci_wbw_wbr_fifos.v,v $
< // Revision 1.2  2003/01/30 22:01:09  mihad
< // Updated synchronization in top level fifo modules.
< //
< // Revision 1.1  2003/01/27 16:49:31  mihad
< // Changed module and file names. Updated scripts accordingly. FIFO synchronizations changed.
< //
---
> // $Log: wbw_wbr_fifos.v,v $
86c80
< module pci_wbw_wbr_fifos
---
> module WBW_WBR_FIFOS
353c347
<     pci_wb_tpram #(`WB_FIFO_RAM_ADDR_LENGTH, 40) wbw_fifo_storage
---
>     WB_TPRAM #(`WB_FIFO_RAM_ADDR_LENGTH, 40) wbw_fifo_storage
384c378
<     pci_wb_tpram #(`WB_FIFO_RAM_ADDR_LENGTH, 40) wbr_fifo_storage
---
>     WB_TPRAM #(`WB_FIFO_RAM_ADDR_LENGTH, 40) wbr_fifo_storage
448c442
<     pci_wb_tpram #(`WB_FIFO_RAM_ADDR_LENGTH, 40) wbu_fifo_storage
---
>     WB_TPRAM #(`WB_FIFO_RAM_ADDR_LENGTH, 40) wbu_fifo_storage
483c477
< pci_wbw_fifo_control #(WBW_ADDR_LENGTH) wbw_fifo_ctrl
---
> WBW_FIFO_CONTROL #(WBW_ADDR_LENGTH) wbw_fifo_ctrl
500c494
< pci_wbr_fifo_control #(WBR_ADDR_LENGTH) wbr_fifo_ctrl
---
> WBR_FIFO_CONTROL #(WBR_ADDR_LENGTH) wbr_fifo_ctrl
540,557d533
< wire [(WBW_ADDR_LENGTH-2):0] pci_clk_sync_inGreyCount ;
< reg  [(WBW_ADDR_LENGTH-2):0] pci_clk_inGreyCount ;
< synchronizer_flop #((WBW_ADDR_LENGTH - 1)) i_synchronizer_reg_inGreyCount
< (
<     .data_in        (inGreyCount),
<     .clk_out        (pci_clock_in),
<     .sync_data_out  (pci_clk_sync_inGreyCount),
<     .async_reset    (1'b0)
< ) ;
<
< always@(posedge pci_clock_in or posedge wbw_clear)
< begin
<     if (wbw_clear)
<         pci_clk_inGreyCount <= #`FF_DELAY 1 ;
<     else
<         pci_clk_inGreyCount <= # `FF_DELAY pci_clk_sync_inGreyCount ;
< end
<
591c567,581
< assign wbw_transaction_ready_out = pci_clk_inGreyCount != outGreyCount ;
---
> // synchronize transaction ready output to reading clock
> // transaction ready is set when incoming transaction count is not equal to outgoing transaction count (what goes in must come out logic)
> // transaction ready is cleared when whole transaction is pulled out of fifo (otherwise it could stay set for additional cycle and result in wrong op.)
> wire wbw_transaction_ready_flop_i = inGreyCount != outGreyCount ;
>
> meta_flop #(0) i_meta_flop_wbw_transaction_ready
> (
>     .rst_i      (wbw_clear),
>     .clk_i      (pci_clock_in),
>     .ld_i       (out_count_en),
>     .ld_val_i   (1'b0),
>     .en_i       (1'b1),
>     .d_i        (wbw_transaction_ready_flop_i),
>     .meta_q_o   (wbw_transaction_ready_out)
> ) ;
bash-2.05a$
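
As far as I understand the diff, the essential change is where the
gray-coded transaction count crosses the clock boundary: the new
revision (the "<" side) first passes inGreyCount through a two-stage
synchronizer into the PCI clock domain and only then compares it,
instead of comparing in the source domain and registering the result
through meta_flop. A minimal generic sketch of that technique, with
illustrative names (this is not the core's actual code):

module gray_count_sync (dst_clk, dst_rst, src_gray_count, dst_gray_count) ;

    parameter width = 4 ;

    input              dst_clk ;
    input              dst_rst ;
    input  [width-1:0] src_gray_count ; // count from the other clock domain
    output [width-1:0] dst_gray_count ; // safe to compare in dst_clk domain

    reg [width-1:0] sync_stage1 ;
    reg [width-1:0] sync_stage2 ;

    // two-stage synchronizer: the first flop may go metastable,
    // the second gives it a full cycle to resolve
    always@(posedge dst_clk or posedge dst_rst)
    begin
        if (dst_rst)
        begin
            sync_stage1 <= {width{1'b0}} ;
            sync_stage2 <= {width{1'b0}} ;
        end
        else
        begin
            sync_stage1 <= src_gray_count ;
            sync_stage2 <= sync_stage1 ;
        end
    end

    assign dst_gray_count = sync_stage2 ;

endmodule

Because a gray code changes only one bit per increment, the sampled
value is always either the old or the new count, never a mix of bits
from both. Sampling a multi-bit count without that property could
produce a transient bogus value and make the read side pull the wrong
FIFO entry - an address instead of data, perhaps.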

--------------------

        Are you using the latest version of the core? Synchronization
        problems in the FIFOs were reported before, and I changed the
        synchronization recently - maybe a month ago.

        If you are using the latest version, can you please tell me how
        these 400 transactions are distributed? I mean - are there any
        wait cycles inserted, are they fast back-to-back, and how many
        cycles are there between the last data and the next address
        phase? I would need this information to come up with a testcase
        simulating your situation - maybe it is a plain RTL bug, not a
        sync issue.
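
        To show what I mean, here is a hypothetical stimulus fragment
        (illustrative names, not from the core's testbench - the real
        transfers come from a PCI master, but these are the timing
        parameters I am asking about):

        `timescale 1ns / 1ns

        module write_stimulus ;

            parameter words       = 400 ; // words per chunk
            parameter idle_cycles = 0 ;   // extra idle cycles between writes

            reg         clk ;
            reg  [31:0] adr ;
            reg  [31:0] dat ;
            reg         cyc, stb, we ;
            reg         ack ;

            initial clk = 1'b0 ;
            always #25 clk = ~clk ; // 20 MHz clock (50 ns period)

            // trivial slave model: one wait state, then acknowledge
            always@(posedge clk)
                ack <= cyc & stb & ~ack ;

            integer i ;
            initial
            begin
                cyc = 0 ; stb = 0 ; we = 0 ;
                @(posedge clk) ;
                for (i = 0 ; i < words ; i = i + 1)
                begin
                    // single (non-bursting) write
                    adr <= 32'h1000_0000 + (i * 4) ;
                    dat <= i ;
                    cyc <= 1 ; stb <= 1 ; we <= 1 ;
                    @(posedge clk) ;
                    while (!ack) @(posedge clk) ; // slave wait states
                    cyc <= 0 ; stb <= 0 ; we <= 0 ;
                    // at least one idle cycle so cyc/stb deassert
                    repeat (idle_cycles + 1) @(posedge clk) ;
                end
                $finish ;
            end

        endmodule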

        Regards,
        Miha Dolenc

        ----- Original Message -----
        From: <lnds@h...>
        To: <pci@o...>
        Sent: Wednesday, February 19, 2003 1:08 PM
        Subject: [pci] Data Corruption writing into Target


         > I'm seeing some data corruption when writing from a fast
         > master into my slow OpenCores target (20 MHz WISHBONE clock,
         > multiple cycles to process data).
         >
         > I'm doing transfers in chunks of 400 words, not bursting, and
         > I see on average one word corrupted out of these 400 words.
         > The corrupted word appears to be scattered randomly, and all
         > data before and after this word is correct.
         >
         > After playing around with the base address I've found that
         > the upper bits of the corrupted data word match the base
         > address and change as it changes, so I'm convinced that the
         > corrupted data word is in fact a PCI address coming out of
         > the WISHBONE data port instead of the real data.
         >
         > Slowing down the master to be slower than the slave seems to
         > completely eliminate the problem.
         >
         > Miha wrote previously that "A bug was found in PCI Target
         > path of the bridge in case when external PCI master was able
         > to produce a lot more data than WISHBONE slave connected to
         > the bridge could sink."
         >
         > Are there still believed to be problems here? Is this a
         > symptom of the same problem, or of other sync problems in
         > the FIFOs?
         >
         > Thanks,
         > Ludi.



--
To unsubscribe from pci mailing list please visit http://www.opencores.org/mailinglists.shtml